To detect pneumonia we need to detect inflammation of the lungs. In this project, the challenge is to build an algorithm that detects a visual signal for pneumonia in medical images. Specifically, the algorithm must automatically locate lung opacities on chest radiographs.
stage_2_test_images - Images for testing: 3,000 DICOM images in total.
Data Fields
- patientId - A patientId. Each patientId corresponds to a unique image.
- x - the upper-left x coordinate of the bounding box.
- y - the upper-left y coordinate of the bounding box.
- width - the width of the bounding box.
- height - the height of the bounding box.
- Target - the binary Target, indicating whether this sample has evidence of pneumonia.
- class - one of Normal, No Lung Opacity / Not Normal, or Lung Opacity.
!pip install pydicom  # We use the pydicom package for DICOM image processing.
Collecting pydicom
Downloading https://files.pythonhosted.org/packages/d3/56/342e1f8ce5afe63bf65c23d0b2c1cd5a05600caad1c211c39725d3a4cc56/pydicom-2.0.0-py3-none-any.whl (35.4MB)
|████████████████████████████████| 35.5MB 1.2MB/s
Installing collected packages: pydicom
Successfully installed pydicom-2.0.0
# import necessary packages
import os
import csv
import random
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from tqdm import tqdm_notebook
from matplotlib.patches import Rectangle
import seaborn as sns
import pydicom as dcm
from skimage import io
from skimage import measure
from skimage.transform import resize
import tensorflow as tf
from tensorflow import keras
import matplotlib.patches as patches
import pydicom
%matplotlib inline
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
  import pandas.util.testing as tm
from google.colab import drive #mount google drive
drive.mount('/content/drive')
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly

Enter your authorization code: ··········
Mounted at /content/drive
os.chdir('/content/drive/My Drive/Capstone My Data') #Change working directory
class_info_df = pd.read_csv('stage_2_detailed_class_info.csv') #Load csv files
train_labels_df = pd.read_csv('stage_2_train_labels.csv')
train_labels_df.shape #Print shape of train_labels_df
(30227, 6)
class_info_df.shape #Print shape of class_info_df
(30227, 2)
class_info_df.head(5) #print 5 records
| | patientId | class |
|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | No Lung Opacity / Not Normal |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | No Lung Opacity / Not Normal |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | No Lung Opacity / Not Normal |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | Normal |
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | Lung Opacity |
train_labels_df.head(5) #print 5 records
| | patientId | x | y | width | height | Target |
|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | NaN | NaN | NaN | NaN | 0 |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | NaN | NaN | NaN | NaN | 0 |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | NaN | NaN | NaN | NaN | 0 |
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | 264.0 | 152.0 | 213.0 | 379.0 | 1 |
def missing_data(data):
    total = data.isnull().sum().sort_values(ascending=False)
    percent = (data.isnull().sum()/data.isnull().count()*100).sort_values(ascending=False)
    return np.transpose(pd.concat([total, percent], axis=1, keys=['Total', 'Percent']))
missing_data(train_labels_df[train_labels_df['Target']==0]) #Normal
| | height | width | y | x | Target | patientId |
|---|---|---|---|---|---|---|
| Total | 20672.0 | 20672.0 | 20672.0 | 20672.0 | 0.0 | 0.0 |
| Percent | 100.0 | 100.0 | 100.0 | 100.0 | 0.0 | 0.0 |
missing_data(train_labels_df[train_labels_df['Target']==1]) # Positive
| | Target | height | width | y | x | patientId |
|---|---|---|---|---|---|---|
| Total | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| Percent | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
missing_data(class_info_df)
| | class | patientId |
|---|---|---|
| Total | 0.0 | 0.0 |
| Percent | 0.0 | 0.0 |
f, ax = plt.subplots(1,1, figsize=(6,4))
total = float(len(class_info_df))
sns.countplot(class_info_df['class'],order = class_info_df['class'].value_counts().index, palette='Set3')
for p in ax.patches:
    height = p.get_height()
    ax.text(p.get_x() + p.get_width()/2.,
            height + 3,
            '{:1.2f}%'.format(100*height/total),
            ha="center")
plt.show()
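The label drawn above each bar is just the class share formatted to two decimal places. A minimal check of the format string, using the count of the largest class from the distribution shown later:

```python
# The annotation string drawn above each bar: the bar height as a
# percentage of all samples, formatted (rounded) to two decimals.
height, total = 11821, 30227
label = '{:1.2f}%'.format(100 * height / total)
print(label)  # 39.11%
```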
def get_distribution(data, feature):
    # Get the count for each label
    label_counts = data[feature].value_counts()
    # Get total number of samples
    total_samples = len(data)
    # Count the number of items in each class
    print("Feature: {}".format(feature))
    for i in range(len(label_counts)):
        label = label_counts.index[i]
        count = label_counts.values[i]
        percent = int((count / total_samples) * 10000) / 100
        print("{:<30s}: {} or {}%".format(label, count, percent))
get_distribution(class_info_df, 'class')
Feature: class
No Lung Opacity / Not Normal  : 11821 or 39.1%
Lung Opacity                  : 9555 or 31.61%
Normal                        : 8851 or 29.28%
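Note that `get_distribution` truncates rather than rounds the percentage (via `int(... * 10000) / 100`), which is why the largest class prints as 39.1% even though rounding would give 39.11%:

```python
count, total = 11821, 30227
# truncation, as used in get_distribution above
percent = int((count / total) * 10000) / 100
print(percent)  # 39.1
# plain rounding, for comparison
print(round(100 * count / total, 2))  # 39.11
```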
train_class_df = train_labels_df.merge(class_info_df, on='patientId', how='inner')
train_class_df.head(5)
| | patientId | x | y | width | height | Target | class |
|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | NaN | NaN | NaN | NaN | 0 | Normal |
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | 264.0 | 152.0 | 213.0 | 379.0 | 1 | Lung Opacity |
fig, ax = plt.subplots(nrows=1,figsize=(12,6))
tmp = train_class_df.groupby('Target')['class'].value_counts()
df = pd.DataFrame(data={'Exams': tmp.values}, index=tmp.index).reset_index()
sns.barplot(ax=ax,x = 'Target', y='Exams',hue='class',data=df, palette='Set3')
plt.title("Class and Target")
plt.show()
target1 = train_class_df[train_class_df['Target']==1]
fig, ax = plt.subplots(2, 2, figsize=(12, 12))
sns.distplot(target1['x'],kde=True,bins=50, color="red", ax=ax[0,0])
sns.distplot(target1['y'],kde=True,bins=50, color="blue", ax=ax[0,1])
sns.distplot(target1['width'],kde=True,bins=50, color="green", ax=ax[1,0])
sns.distplot(target1['height'],kde=True,bins=50, color="magenta", ax=ax[1,1])
locs, labels = plt.xticks()
plt.tick_params(axis='both', which='major', labelsize=12)
plt.show()
image_train_path = os.listdir('stage_2_train_images')
image_test_path = os.listdir('stage_2_test_images')
print("Number of images in train set:", len(image_train_path),"\nNumber of images in test set:", len(image_test_path))
Number of images in train set: 26684 Number of images in test set: 3000
# Display image metadata
samplePatientID = train_class_df['patientId'].iloc[0] + '.dcm'
dicom_file_path = os.path.join("stage_2_train_images/", samplePatientID)
dicom_file_dataset = dcm.dcmread(dicom_file_path)
dicom_file_dataset
Dataset.file_meta -------------------------------
(0002, 0000) File Meta Information Group Length  UL: 202
(0002, 0001) File Meta Information Version       OB: b'\x00\x01'
(0002, 0002) Media Storage SOP Class UID         UI: Secondary Capture Image Storage
(0002, 0003) Media Storage SOP Instance UID      UI: 1.2.276.0.7230010.3.1.4.8323329.28530.1517874485.775526
(0002, 0010) Transfer Syntax UID                 UI: JPEG Baseline (Process 1)
(0002, 0012) Implementation Class UID            UI: 1.2.276.0.7230010.3.0.3.6.0
(0002, 0013) Implementation Version Name         SH: 'OFFIS_DCMTK_360'
-------------------------------------------------
(0008, 0005) Specific Character Set              CS: 'ISO_IR 100'
(0008, 0016) SOP Class UID                       UI: Secondary Capture Image Storage
(0008, 0018) SOP Instance UID                    UI: 1.2.276.0.7230010.3.1.4.8323329.28530.1517874485.775526
(0008, 0020) Study Date                          DA: '19010101'
(0008, 0030) Study Time                          TM: '000000.00'
(0008, 0050) Accession Number                    SH: ''
(0008, 0060) Modality                            CS: 'CR'
(0008, 0064) Conversion Type                     CS: 'WSD'
(0008, 0090) Referring Physician's Name          PN: ''
(0008, 103e) Series Description                  LO: 'view: PA'
(0010, 0010) Patient's Name                      PN: '0004cfab-14fd-4e49-80ba-63a80b6bddd6'
(0010, 0020) Patient ID                          LO: '0004cfab-14fd-4e49-80ba-63a80b6bddd6'
(0010, 0030) Patient's Birth Date                DA: ''
(0010, 0040) Patient's Sex                       CS: 'F'
(0010, 1010) Patient's Age                       AS: '51'
(0018, 0015) Body Part Examined                  CS: 'CHEST'
(0018, 5101) View Position                       CS: 'PA'
(0020, 000d) Study Instance UID                  UI: 1.2.276.0.7230010.3.1.2.8323329.28530.1517874485.775525
(0020, 000e) Series Instance UID                 UI: 1.2.276.0.7230010.3.1.3.8323329.28530.1517874485.775524
(0020, 0010) Study ID                            SH: ''
(0020, 0011) Series Number                       IS: "1"
(0020, 0013) Instance Number                     IS: "1"
(0020, 0020) Patient Orientation                 CS: ''
(0028, 0002) Samples per Pixel                   US: 1
(0028, 0004) Photometric Interpretation          CS: 'MONOCHROME2'
(0028, 0010) Rows                                US: 1024
(0028, 0011) Columns                             US: 1024
(0028, 0030) Pixel Spacing                       DS: [0.14300000000000002, 0.14300000000000002]
(0028, 0100) Bits Allocated                      US: 8
(0028, 0101) Bits Stored                         US: 8
(0028, 0102) High Bit                            US: 7
(0028, 0103) Pixel Representation                US: 0
(0028, 2110) Lossy Image Compression             CS: '01'
(0028, 2114) Lossy Image Compression Method      CS: 'ISO_10918_1'
(7fe0, 0010) Pixel Data                          OB: Array of 142006 elements
# Show DICOM images
def show_dicom_images(data):
    img_data = list(data.T.to_dict().values())
    f, ax = plt.subplots(2, 3, figsize=(16, 12))
    for i, data_row in enumerate(img_data):
        patientImage = data_row['patientId'] + '.dcm'
        imagePath = os.path.join("stage_2_train_images/", patientImage)
        # read the DICOM file once and reuse it for metadata and pixel data
        data_row_img = dcm.dcmread(imagePath)
        age = data_row_img.PatientAge
        sex = data_row_img.PatientSex
        ax[i//3, i%3].imshow(data_row_img.pixel_array, cmap=plt.cm.bone)
        ax[i//3, i%3].axis('off')
        ax[i//3, i%3].set_title('ID: {}\nAge: {} Sex: {} Target: {}\nClass: {}\nWindow: {}:{}:{}:{}'.format(
            data_row['patientId'], age, sex, data_row['Target'], data_row['class'],
            data_row['x'], data_row['y'], data_row['width'], data_row['height']))
    plt.show()
show_dicom_images(train_class_df[train_class_df['Target']==1].sample(6)) # for target 1
def show_dicom_images_with_boxes(data):
    img_data = list(data.T.to_dict().values())
    f, ax = plt.subplots(2, 3, figsize=(16, 12))
    for i, data_row in enumerate(img_data):
        patientImage = data_row['patientId'] + '.dcm'
        imagePath = os.path.join("stage_2_train_images/", patientImage)
        # read the DICOM file once and reuse it for metadata and pixel data
        data_row_img = dcm.dcmread(imagePath)
        age = data_row_img.PatientAge
        sex = data_row_img.PatientSex
        ax[i//3, i%3].imshow(data_row_img.pixel_array, cmap=plt.cm.bone)
        ax[i//3, i%3].axis('off')
        ax[i//3, i%3].set_title('ID: {}\nAge: {} Sex: {} Target: {}\nClass: {}'.format(
            data_row['patientId'], age, sex, data_row['Target'], data_row['class']))
        # overlay every bounding box annotated for this patient
        rows = train_class_df[train_class_df['patientId'] == data_row['patientId']]
        box_data = list(rows.T.to_dict().values())
        for j, row in enumerate(box_data):
            ax[i//3, i%3].add_patch(Rectangle(xy=(row['x'], row['y']),
                                              width=row['width'], height=row['height'],
                                              color="red", alpha=0.1))
    plt.show()
show_dicom_images_with_boxes(train_class_df[train_class_df['Target']==1].sample(6))
show_dicom_images(train_class_df[train_class_df['Target']==0].sample(6)) # Display images with target 0
Target = 1 is associated with class: Lung Opacity.
Target = 0 is associated with either class: Normal or class: No Lung Opacity / Not Normal.
There are no x, y, width & height values for records with Target = 0.
The x-coordinate distribution is bimodal (two Gaussian-like peaks), reflecting that lung opacities occur in both the left and right lungs in this dataset.
The labels table contains [filename : pneumonia location] pairs, one per row.
The code below loads the table and transforms it into a dictionary mapping each filename to its list of pneumonia locations.
# empty dictionary
pneumonia_locations = {}
# load table
with open(os.path.join('stage_2_train_labels.csv'), mode='r') as infile:
    # open reader
    reader = csv.reader(infile)
    # skip header
    next(reader, None)
    # loop through rows
    for rows in reader:
        # retrieve information
        filename = rows[0]
        location = rows[1:5]
        pneumonia = rows[5]
        # if row contains pneumonia add label to dictionary
        # which contains a list of pneumonia locations per filename
        if pneumonia == '1':
            # convert string to float to int
            location = [int(float(i)) for i in location]
            # save pneumonia location in dictionary
            if filename in pneumonia_locations:
                pneumonia_locations[filename].append(location)
            else:
                pneumonia_locations[filename] = [location]
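As a side note, the same dictionary can be built a little more compactly with `collections.defaultdict`, which removes the explicit key-exists branch. A minimal sketch on an in-memory CSV (the patient IDs here are made up; the box values mirror the sample rows shown earlier):

```python
import csv
import io
from collections import defaultdict

# In-memory stand-in for stage_2_train_labels.csv (hypothetical IDs)
sample_csv = io.StringIO(
    "patientId,x,y,width,height,Target\n"
    "patient-a,264.0,152.0,213.0,379.0,1\n"
    "patient-a,562.0,152.0,256.0,453.0,1\n"
    "patient-b,,,,,0\n"
)
locations = defaultdict(list)
reader = csv.reader(sample_csv)
next(reader, None)  # skip header
for row in reader:
    if row[5] == '1':  # only positive rows carry a box
        locations[row[0]].append([int(float(v)) for v in row[1:5]])
print(dict(locations))
# {'patient-a': [[264, 152, 213, 379], [562, 152, 256, 453]]}
```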
train_class_df["filenames"] = train_class_df['patientId'] + '.dcm'  # add a filename column
train_class_df.head()
| | patientId | x | y | width | height | Target | class | filenames |
|---|---|---|---|---|---|---|---|---|
| 0 | 0004cfab-14fd-4e49-80ba-63a80b6bddd6 | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal | 0004cfab-14fd-4e49-80ba-63a80b6bddd6.dcm |
| 1 | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal | 00313ee0-9eaa-42f4-b0ab-c148ed3241cd.dcm |
| 2 | 00322d4d-1c29-4943-afc9-b6754be640eb | NaN | NaN | NaN | NaN | 0 | No Lung Opacity / Not Normal | 00322d4d-1c29-4943-afc9-b6754be640eb.dcm |
| 3 | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5 | NaN | NaN | NaN | NaN | 0 | Normal | 003d8fa0-6bf1-40ed-b54c-ac657f8495c5.dcm |
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | 264.0 | 152.0 | 213.0 | 379.0 | 1 | Lung Opacity | 00436515-870c-4b36-a041-de91049b9ab4.dcm |
POSITIVE = train_class_df[train_class_df['Target']==1]
NEGATIVE = train_class_df[train_class_df['Target']==0]
POSITIVE.shape
(16957, 8)
NEGATIVE.shape
(20672, 8)
# Use only 1.5K records per class for training, since training on the full dataset is slow.
records_for_training=1500
POSITIVE = POSITIVE[:records_for_training]
NEGATIVE = NEGATIVE[:records_for_training]
training_df=pd.concat([POSITIVE,NEGATIVE])
training_df.head()
| | patientId | x | y | width | height | Target | class | filenames |
|---|---|---|---|---|---|---|---|---|
| 4 | 00436515-870c-4b36-a041-de91049b9ab4 | 264.0 | 152.0 | 213.0 | 379.0 | 1 | Lung Opacity | 00436515-870c-4b36-a041-de91049b9ab4.dcm |
| 5 | 00436515-870c-4b36-a041-de91049b9ab4 | 264.0 | 152.0 | 213.0 | 379.0 | 1 | Lung Opacity | 00436515-870c-4b36-a041-de91049b9ab4.dcm |
| 6 | 00436515-870c-4b36-a041-de91049b9ab4 | 562.0 | 152.0 | 256.0 | 453.0 | 1 | Lung Opacity | 00436515-870c-4b36-a041-de91049b9ab4.dcm |
| 7 | 00436515-870c-4b36-a041-de91049b9ab4 | 562.0 | 152.0 | 256.0 | 453.0 | 1 | Lung Opacity | 00436515-870c-4b36-a041-de91049b9ab4.dcm |
| 10 | 00704310-78a8-4b38-8475-49f4573b2dbb | 323.0 | 577.0 | 160.0 | 104.0 | 1 | Lung Opacity | 00704310-78a8-4b38-8475-49f4573b2dbb.dcm |
filenames = list(training_df["filenames"])
random.shuffle(filenames)
# split into train and validation small set
n_valid_samples = 600
total_records_for_training=3000
train_filenames = filenames[n_valid_samples:total_records_for_training]
valid_filenames = filenames[:n_valid_samples]
len(valid_filenames)
600
len(train_filenames)
2400
The dataset is too large to fit into memory, so we need to create a generator that loads data on the fly.
The generator takes in some filenames, batch_size and other parameters.
The generator outputs a random batch of numpy images and numpy masks.
class generator(keras.utils.Sequence):
    def __init__(self, folder, filenames, pneumonia_locations=None, batch_size=2,
                 image_size=128, shuffle=True, augment=False, predict=False):
        self.folder = folder
        self.filenames = filenames
        self.pneumonia_locations = pneumonia_locations
        self.batch_size = batch_size
        self.image_size = image_size
        self.shuffle = shuffle
        self.augment = augment
        self.predict = predict
        self.on_epoch_end()

    def __load__(self, filename):
        # load dicom file as numpy array
        img = pydicom.dcmread(os.path.join(self.folder, filename)).pixel_array
        # create empty mask
        msk = np.zeros(img.shape)
        # get filename without extension
        filename = filename.split('.')[0]
        # if image contains pneumonia (guard against pneumonia_locations=None)
        if self.pneumonia_locations and filename in self.pneumonia_locations:
            # loop through pneumonia
            for location in self.pneumonia_locations[filename]:
                # add 1's at the location of the pneumonia
                x, y, w, h = location
                msk[y:y+h, x:x+w] = 1
        # resize both image and mask
        img = resize(img, (self.image_size, self.image_size), mode='reflect')
        msk = resize(msk, (self.image_size, self.image_size), mode='reflect') > 0.5
        # if augment then horizontal flip half the time
        if self.augment and random.random() > 0.5:
            img = np.fliplr(img)
            msk = np.fliplr(msk)
        # add trailing channel dimension
        img = np.expand_dims(img, -1)
        msk = np.expand_dims(msk, -1)
        return img, msk

    def __loadpredict__(self, filename):
        # load dicom file as numpy array
        img = pydicom.dcmread(os.path.join(self.folder, filename)).pixel_array
        # resize image
        img = resize(img, (self.image_size, self.image_size), mode='reflect')
        # add trailing channel dimension
        img = np.expand_dims(img, -1)
        return img

    def __getitem__(self, index):
        # select batch
        filenames = self.filenames[index*self.batch_size:(index+1)*self.batch_size]
        # predict mode: return images and filenames
        if self.predict:
            # load files
            imgs = [self.__loadpredict__(filename) for filename in filenames]
            # create numpy batch
            imgs = np.array(imgs)
            return imgs, filenames
        # train mode: return images and masks
        else:
            # load files
            items = [self.__load__(filename) for filename in filenames]
            # unzip images and masks
            imgs, msks = zip(*items)
            # create numpy batch
            imgs = np.array(imgs)
            msks = np.array(msks)
            return imgs, msks

    def on_epoch_end(self):
        if self.shuffle:
            random.shuffle(self.filenames)

    def __len__(self):
        if self.predict:
            # return everything
            return int(np.ceil(len(self.filenames) / self.batch_size))
        else:
            # return full batches only
            return int(len(self.filenames) / self.batch_size)
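The mask construction inside `__load__` can be illustrated on a tiny array: each annotated box (x, y, w, h) becomes a rectangle of 1's in an otherwise zero mask (toy shape and box values, not real data):

```python
import numpy as np

x, y, w, h = 2, 1, 3, 4          # hypothetical box
msk = np.zeros((8, 8))           # stands in for the 1024x1024 radiograph
msk[y:y+h, x:x+w] = 1            # same slicing as in __load__
print(int(msk.sum()))            # 12 = 3 * 4 pixels inside the box
```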
def create_downsample(channels, inputs):
    x = keras.layers.BatchNormalization(momentum=0.9)(inputs)
    x = keras.layers.LeakyReLU(0)(x)
    x = keras.layers.Conv2D(channels, 1, padding='same', use_bias=False)(x)
    x = keras.layers.MaxPool2D(2)(x)
    return x

def create_resblock(channels, inputs):
    x = keras.layers.BatchNormalization(momentum=0.9)(inputs)
    x = keras.layers.LeakyReLU(0)(x)
    x = keras.layers.Conv2D(channels, 3, padding='same', use_bias=False)(x)
    x = keras.layers.BatchNormalization(momentum=0.9)(x)
    x = keras.layers.LeakyReLU(0)(x)
    x = keras.layers.Conv2D(channels, 3, padding='same', use_bias=False)(x)
    return keras.layers.add([x, inputs])
def create_network(input_size, channels, n_blocks=2, depth=4):
    # input
    inputs = keras.Input(shape=(input_size, input_size, 1))
    x = keras.layers.Conv2D(channels, 3, padding='same', use_bias=False)(inputs)
    # residual blocks
    for d in range(depth):
        channels = channels * 2
        x = create_downsample(channels, x)
        for b in range(n_blocks):
            x = create_resblock(channels, x)
    # output
    x = keras.layers.BatchNormalization(momentum=0.9)(x)
    x = keras.layers.LeakyReLU(0)(x)
    x = keras.layers.Conv2D(1, 1, activation='sigmoid')(x)
    outputs = keras.layers.UpSampling2D(2**depth)(x)
    model = keras.Model(inputs=inputs, outputs=outputs)
    return model
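A quick arithmetic check (no model build needed) that the final UpSampling2D undoes the encoder's downsampling: each of the `depth` downsample stages halves the spatial size, and `2**depth` scales it back in one step:

```python
input_size, depth = 256, 4
size = input_size
for _ in range(depth):
    size //= 2          # MaxPool2D(2) in each create_downsample halves the size
print(size)             # 16x16 feature map at the bottleneck
print(size * 2**depth)  # 256, back to the input resolution
```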
# define iou or jaccard loss function
def iou_loss(y_true, y_pred):
    y_true = tf.reshape(y_true, [-1])
    y_pred = tf.reshape(y_pred, [-1])
    intersection = tf.reduce_sum(y_true * y_pred)
    score = (intersection + 1.) / (tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - intersection + 1.)
    return 1 - score

# combine bce loss and iou loss
def iou_bce_loss(y_true, y_pred):
    return 0.5 * keras.losses.binary_crossentropy(y_true, y_pred) + 0.5 * iou_loss(y_true, y_pred)

# mean iou as a metric
def mean_iou(y_true, y_pred):
    y_pred = tf.round(y_pred)
    intersect = tf.reduce_sum(y_true * y_pred, axis=[1, 2, 3])
    union = tf.reduce_sum(y_true, axis=[1, 2, 3]) + tf.reduce_sum(y_pred, axis=[1, 2, 3])
    smooth = tf.ones(tf.shape(intersect))
    return tf.reduce_mean((intersect + smooth) / (union - intersect + smooth))
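The behaviour of the soft IoU loss can be sanity-checked with a plain NumPy re-implementation (a sketch mirroring `iou_loss` above, including the same +1 smoothing): a perfect prediction gives loss 0, and a completely disjoint one approaches 1.

```python
import numpy as np

def iou_loss_np(y_true, y_pred, smooth=1.0):
    # same formula as iou_loss above, on flat NumPy arrays
    y_true, y_pred = y_true.ravel(), y_pred.ravel()
    intersection = np.sum(y_true * y_pred)
    score = (intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) - intersection + smooth)
    return 1.0 - score

mask = np.ones((4, 4))
print(iou_loss_np(mask, mask))              # 0.0 for a perfect match
print(iou_loss_np(mask, np.zeros((4, 4))))  # 16/17 ~ 0.941 for no overlap
```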
# cosine learning rate annealing
def cosine_annealing(x):
    lr = 0.001
    epochs = 25
    return lr * (np.cos(np.pi*x/epochs) + 1.) / 2

learning_rate = tf.keras.callbacks.LearningRateScheduler(cosine_annealing)
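The schedule starts at the base rate and decays smoothly to zero over the 25-epoch horizon hard-coded above. A few sample values (the function is copied here only for quick inspection):

```python
import numpy as np

def cosine_annealing(x):
    # copy of the schedule above
    lr = 0.001
    epochs = 25
    return lr * (np.cos(np.pi * x / epochs) + 1.) / 2

for epoch in (0, 12.5, 25):
    print(epoch, cosine_annealing(epoch))
# epoch 0    -> 0.001  (full rate)
# epoch 12.5 -> 0.0005 (half-way point)
# epoch 25   -> ~0.0   (fully annealed)
```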
# create network and compiler
model = create_network(input_size=256, channels=32, n_blocks=2, depth=4)
model.compile(optimizer='adam',
              loss=iou_bce_loss,
              metrics=['accuracy', mean_iou])
# create train and validation generators
folder = 'stage_2_train_images'
train_gen = generator(folder, train_filenames, pneumonia_locations, batch_size=32, image_size=256, shuffle=True, augment=True, predict=False)
valid_gen = generator(folder, valid_filenames, pneumonia_locations, batch_size=32, image_size=256, shuffle=False, predict=False)
from tensorflow.keras.callbacks import ModelCheckpoint
checkpoint_file = "4000_checkpoint.hdf5"
# save_freq='epoch' so the checkpoint is evaluated once per epoch,
# when validation metrics such as val_accuracy are actually available
checkpoint = ModelCheckpoint(checkpoint_file, monitor='val_accuracy', verbose=1,
                             save_best_only=True, mode='max', save_freq='epoch')
history = model.fit(train_gen, validation_data=valid_gen,
                    callbacks=[learning_rate, checkpoint], epochs=6,
                    workers=8, use_multiprocessing=True)
Epoch 1/6
WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.
WARNING:tensorflow:Can save best model only with val_accuracy available, skipping.
 1/78 [..............................] - ETA: 0s - loss: 0.8666 - accuracy: 0.4816 - mean_iou: 0.0651
...
78/78 [==============================] - ETA: 0s - loss: 0.5318 - accuracy: 0.8788 - mean_iou: 0.4122
For high performance data pipelines tf.data is recommended. 78/78 [==============================] - 3584s 46s/step - loss: 0.5318 - accuracy: 0.8788 - mean_iou: 0.4122 - val_loss: 0.4704 - val_accuracy: 0.9058 - val_mean_iou: 0.4706 - lr: 0.0010 Epoch 2/6 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 1/78 [..............................] - ETA: 0s - loss: 0.4139 - accuracy: 0.9449 - mean_iou: 0.5732WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 2/78 [..............................] - ETA: 27:32 - loss: 0.4606 - accuracy: 0.9219 - mean_iou: 0.4910WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 3/78 [>.............................] - ETA: 36:35 - loss: 0.4642 - accuracy: 0.9183 - mean_iou: 0.4922WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 4/78 [>.............................] - ETA: 40:36 - loss: 0.4729 - accuracy: 0.9154 - mean_iou: 0.4893WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 5/78 [>.............................] - ETA: 42:37 - loss: 0.4683 - accuracy: 0.9126 - mean_iou: 0.4963WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 6/78 [=>............................] - ETA: 43:42 - loss: 0.4743 - accuracy: 0.9103 - mean_iou: 0.4940WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 7/78 [=>............................] - ETA: 44:17 - loss: 0.4828 - accuracy: 0.9067 - mean_iou: 0.4910WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 8/78 [==>...........................] - ETA: 44:33 - loss: 0.4852 - accuracy: 0.9060 - mean_iou: 0.4775WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 
9/78 [==>...........................] - ETA: 44:33 - loss: 0.4809 - accuracy: 0.9082 - mean_iou: 0.4862WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 10/78 [==>...........................] - ETA: 44:29 - loss: 0.4839 - accuracy: 0.9062 - mean_iou: 0.4833WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 11/78 [===>..........................] - ETA: 44:18 - loss: 0.4800 - accuracy: 0.9085 - mean_iou: 0.4912WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 12/78 [===>..........................] - ETA: 44:03 - loss: 0.4778 - accuracy: 0.9102 - mean_iou: 0.4925WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 13/78 [====>.........................] - ETA: 43:41 - loss: 0.4801 - accuracy: 0.9098 - mean_iou: 0.4882WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 14/78 [====>.........................] - ETA: 43:15 - loss: 0.4840 - accuracy: 0.9108 - mean_iou: 0.4840WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 15/78 [====>.........................] - ETA: 42:46 - loss: 0.4825 - accuracy: 0.9118 - mean_iou: 0.4878WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 16/78 [=====>........................] - ETA: 42:17 - loss: 0.4786 - accuracy: 0.9137 - mean_iou: 0.4965WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 17/78 [=====>........................] - ETA: 41:44 - loss: 0.4778 - accuracy: 0.9139 - mean_iou: 0.4927WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 18/78 [=====>........................] - ETA: 41:11 - loss: 0.4754 - accuracy: 0.9146 - mean_iou: 0.4973WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 19/78 [======>.......................] 
- ETA: 40:36 - loss: 0.4759 - accuracy: 0.9137 - mean_iou: 0.4958WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 20/78 [======>.......................] - ETA: 40:00 - loss: 0.4754 - accuracy: 0.9143 - mean_iou: 0.4943WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 21/78 [=======>......................] - ETA: 39:25 - loss: 0.4779 - accuracy: 0.9133 - mean_iou: 0.4924WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 22/78 [=======>......................] - ETA: 38:47 - loss: 0.4759 - accuracy: 0.9135 - mean_iou: 0.4977WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 23/78 [=======>......................] - ETA: 38:09 - loss: 0.4756 - accuracy: 0.9129 - mean_iou: 0.4951WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 24/78 [========>.....................] - ETA: 37:31 - loss: 0.4771 - accuracy: 0.9125 - mean_iou: 0.4904WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 25/78 [========>.....................] - ETA: 36:55 - loss: 0.4758 - accuracy: 0.9129 - mean_iou: 0.4848WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 26/78 [=========>....................] - ETA: 36:31 - loss: 0.4751 - accuracy: 0.9130 - mean_iou: 0.4848WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 27/78 [=========>....................] - ETA: 36:05 - loss: 0.4778 - accuracy: 0.9122 - mean_iou: 0.4841WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 28/78 [=========>....................] - ETA: 35:40 - loss: 0.4776 - accuracy: 0.9121 - mean_iou: 0.4841WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 29/78 [==========>...................] 
- ETA: 35:10 - loss: 0.4790 - accuracy: 0.9111 - mean_iou: 0.4817WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 30/78 [==========>...................] - ETA: 34:29 - loss: 0.4765 - accuracy: 0.9119 - mean_iou: 0.4830WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 31/78 [==========>...................] - ETA: 33:50 - loss: 0.4747 - accuracy: 0.9122 - mean_iou: 0.4854WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 32/78 [===========>..................] - ETA: 33:20 - loss: 0.4729 - accuracy: 0.9123 - mean_iou: 0.4882WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 33/78 [===========>..................] - ETA: 32:40 - loss: 0.4726 - accuracy: 0.9120 - mean_iou: 0.4878WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 34/78 [============>.................] - ETA: 31:58 - loss: 0.4733 - accuracy: 0.9116 - mean_iou: 0.4857WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 35/78 [============>.................] - ETA: 31:15 - loss: 0.4742 - accuracy: 0.9113 - mean_iou: 0.4842WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 36/78 [============>.................] - ETA: 30:33 - loss: 0.4728 - accuracy: 0.9117 - mean_iou: 0.4862WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 37/78 [=============>................] - ETA: 29:50 - loss: 0.4730 - accuracy: 0.9118 - mean_iou: 0.4857WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 38/78 [=============>................] - ETA: 29:06 - loss: 0.4739 - accuracy: 0.9119 - mean_iou: 0.4845WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 39/78 [==============>...............] 
- ETA: 28:22 - loss: 0.4739 - accuracy: 0.9119 - mean_iou: 0.4867WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 40/78 [==============>...............] - ETA: 27:39 - loss: 0.4722 - accuracy: 0.9128 - mean_iou: 0.4870WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 41/78 [==============>...............] - ETA: 26:55 - loss: 0.4721 - accuracy: 0.9133 - mean_iou: 0.4855WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 42/78 [===============>..............] - ETA: 26:11 - loss: 0.4706 - accuracy: 0.9141 - mean_iou: 0.4878WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 43/78 [===============>..............] - ETA: 25:28 - loss: 0.4710 - accuracy: 0.9142 - mean_iou: 0.4876WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 44/78 [===============>..............] - ETA: 24:44 - loss: 0.4720 - accuracy: 0.9139 - mean_iou: 0.4836WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 45/78 [================>.............] - ETA: 24:00 - loss: 0.4717 - accuracy: 0.9140 - mean_iou: 0.4840WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 46/78 [================>.............] - ETA: 23:16 - loss: 0.4699 - accuracy: 0.9144 - mean_iou: 0.4870WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 47/78 [=================>............] - ETA: 22:32 - loss: 0.4697 - accuracy: 0.9143 - mean_iou: 0.4881WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 48/78 [=================>............] - ETA: 21:49 - loss: 0.4716 - accuracy: 0.9136 - mean_iou: 0.4872WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 49/78 [=================>............] 
- ETA: 21:05 - loss: 0.4729 - accuracy: 0.9133 - mean_iou: 0.4860WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 50/78 [==================>...........] - ETA: 20:21 - loss: 0.4729 - accuracy: 0.9132 - mean_iou: 0.4848WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 51/78 [==================>...........] - ETA: 19:37 - loss: 0.4727 - accuracy: 0.9131 - mean_iou: 0.4816WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 52/78 [===================>..........] - ETA: 18:53 - loss: 0.4722 - accuracy: 0.9131 - mean_iou: 0.4805WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 53/78 [===================>..........] - ETA: 18:10 - loss: 0.4723 - accuracy: 0.9130 - mean_iou: 0.4785WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 54/78 [===================>..........] - ETA: 17:26 - loss: 0.4723 - accuracy: 0.9131 - mean_iou: 0.4772WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 55/78 [====================>.........] - ETA: 16:42 - loss: 0.4720 - accuracy: 0.9133 - mean_iou: 0.4783WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 56/78 [====================>.........] - ETA: 15:59 - loss: 0.4707 - accuracy: 0.9139 - mean_iou: 0.4818WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 57/78 [====================>.........] - ETA: 15:15 - loss: 0.4706 - accuracy: 0.9140 - mean_iou: 0.4828WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 58/78 [=====================>........] - ETA: 14:31 - loss: 0.4713 - accuracy: 0.9135 - mean_iou: 0.4829WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 59/78 [=====================>........] 
- ETA: 13:48 - loss: 0.4708 - accuracy: 0.9134 - mean_iou: 0.4828WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 60/78 [======================>.......] - ETA: 13:05 - loss: 0.4698 - accuracy: 0.9134 - mean_iou: 0.4847WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 61/78 [======================>.......] - ETA: 12:21 - loss: 0.4693 - accuracy: 0.9133 - mean_iou: 0.4850WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 62/78 [======================>.......] - ETA: 11:37 - loss: 0.4689 - accuracy: 0.9130 - mean_iou: 0.4850WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 63/78 [=======================>......] - ETA: 10:53 - loss: 0.4692 - accuracy: 0.9127 - mean_iou: 0.4828WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 64/78 [=======================>......] - ETA: 10:10 - loss: 0.4696 - accuracy: 0.9123 - mean_iou: 0.4821WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 65/78 [========================>.....] - ETA: 9:26 - loss: 0.4706 - accuracy: 0.9119 - mean_iou: 0.4806 WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 66/78 [========================>.....] - ETA: 8:42 - loss: 0.4706 - accuracy: 0.9117 - mean_iou: 0.4790WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 67/78 [========================>.....] - ETA: 7:59 - loss: 0.4703 - accuracy: 0.9117 - mean_iou: 0.4788WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 68/78 [=========================>....] - ETA: 7:15 - loss: 0.4696 - accuracy: 0.9119 - mean_iou: 0.4804WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 69/78 [=========================>....] 
- ETA: 6:31 - loss: 0.4701 - accuracy: 0.9120 - mean_iou: 0.4799WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 70/78 [=========================>....] - ETA: 5:47 - loss: 0.4694 - accuracy: 0.9123 - mean_iou: 0.4820WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 71/78 [==========================>...] - ETA: 5:03 - loss: 0.4696 - accuracy: 0.9124 - mean_iou: 0.4811WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 72/78 [==========================>...] - ETA: 4:20 - loss: 0.4696 - accuracy: 0.9124 - mean_iou: 0.4812WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 73/78 [===========================>..] - ETA: 3:36 - loss: 0.4700 - accuracy: 0.9123 - mean_iou: 0.4813WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 74/78 [===========================>..] - ETA: 2:53 - loss: 0.4698 - accuracy: 0.9123 - mean_iou: 0.4810WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 75/78 [===========================>..] - ETA: 2:09 - loss: 0.4695 - accuracy: 0.9124 - mean_iou: 0.4814WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 76/78 [============================>.] - ETA: 1:26 - loss: 0.4691 - accuracy: 0.9124 - mean_iou: 0.4814WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 77/78 [============================>.] - ETA: 43s - loss: 0.4690 - accuracy: 0.9123 - mean_iou: 0.4813 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 
78/78 [==============================] - ETA: 0s - loss: 0.4690 - accuracy: 0.9121 - mean_iou: 0.4808 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. 78/78 [==============================] - 3646s 47s/step - loss: 0.4690 - accuracy: 0.9121 - mean_iou: 0.4808 - val_loss: 0.5787 - val_accuracy: 0.8078 - val_mean_iou: 0.2066 - lr: 9.9606e-04 Epoch 3/6 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 1/78 [..............................] - ETA: 0s - loss: 0.4308 - accuracy: 0.9047 - mean_iou: 0.5029WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 2/78 [..............................] - ETA: 27:29 - loss: 0.4367 - accuracy: 0.9123 - mean_iou: 0.5125WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 3/78 [>.............................] - ETA: 36:11 - loss: 0.4345 - accuracy: 0.9120 - mean_iou: 0.4759WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 4/78 [>.............................] - ETA: 40:09 - loss: 0.4439 - accuracy: 0.9114 - mean_iou: 0.4574WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 5/78 [>.............................] - ETA: 42:14 - loss: 0.4524 - accuracy: 0.9102 - mean_iou: 0.4342WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 6/78 [=>............................] - ETA: 43:39 - loss: 0.4577 - accuracy: 0.9075 - mean_iou: 0.4260WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 
7/78 [=>............................] - ETA: 44:18 - loss: 0.4579 - accuracy: 0.9087 - mean_iou: 0.4449WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 8/78 [==>...........................] - ETA: 44:36 - loss: 0.4619 - accuracy: 0.9086 - mean_iou: 0.4421WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 9/78 [==>...........................] - ETA: 44:38 - loss: 0.4573 - accuracy: 0.9099 - mean_iou: 0.4502WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 10/78 [==>...........................] - ETA: 44:33 - loss: 0.4611 - accuracy: 0.9098 - mean_iou: 0.4535WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 11/78 [===>..........................] - ETA: 44:19 - loss: 0.4624 - accuracy: 0.9102 - mean_iou: 0.4489WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 12/78 [===>..........................] - ETA: 44:04 - loss: 0.4587 - accuracy: 0.9112 - mean_iou: 0.4523WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 13/78 [====>.........................] - ETA: 43:41 - loss: 0.4563 - accuracy: 0.9124 - mean_iou: 0.4660WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 14/78 [====>.........................] - ETA: 43:17 - loss: 0.4559 - accuracy: 0.9132 - mean_iou: 0.4685WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 15/78 [====>.........................] - ETA: 42:51 - loss: 0.4551 - accuracy: 0.9143 - mean_iou: 0.4713WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 16/78 [=====>........................] - ETA: 42:22 - loss: 0.4563 - accuracy: 0.9144 - mean_iou: 0.4697WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 17/78 [=====>........................] 
- ETA: 41:50 - loss: 0.4540 - accuracy: 0.9160 - mean_iou: 0.4727WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 18/78 [=====>........................] - ETA: 41:19 - loss: 0.4543 - accuracy: 0.9162 - mean_iou: 0.4756WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 19/78 [======>.......................] - ETA: 40:45 - loss: 0.4517 - accuracy: 0.9170 - mean_iou: 0.4844WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 20/78 [======>.......................] - ETA: 40:09 - loss: 0.4536 - accuracy: 0.9168 - mean_iou: 0.4852WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 21/78 [=======>......................] - ETA: 39:33 - loss: 0.4516 - accuracy: 0.9174 - mean_iou: 0.4923WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 22/78 [=======>......................] - ETA: 38:56 - loss: 0.4478 - accuracy: 0.9185 - mean_iou: 0.5001WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 23/78 [=======>......................] - ETA: 38:19 - loss: 0.4460 - accuracy: 0.9187 - mean_iou: 0.5065WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 24/78 [========>.....................] - ETA: 37:51 - loss: 0.4448 - accuracy: 0.9192 - mean_iou: 0.5001WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 25/78 [========>.....................] - ETA: 37:13 - loss: 0.4437 - accuracy: 0.9192 - mean_iou: 0.5019WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 26/78 [=========>....................] - ETA: 36:35 - loss: 0.4419 - accuracy: 0.9195 - mean_iou: 0.5061WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 27/78 [=========>....................] 
- ETA: 35:55 - loss: 0.4442 - accuracy: 0.9185 - mean_iou: 0.5043WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 28/78 [=========>....................] - ETA: 35:16 - loss: 0.4426 - accuracy: 0.9190 - mean_iou: 0.5077WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 29/78 [==========>...................] - ETA: 34:36 - loss: 0.4422 - accuracy: 0.9190 - mean_iou: 0.5110WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 30/78 [==========>...................] - ETA: 33:56 - loss: 0.4416 - accuracy: 0.9191 - mean_iou: 0.5101WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 31/78 [==========>...................] - ETA: 33:15 - loss: 0.4405 - accuracy: 0.9191 - mean_iou: 0.5140WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 32/78 [===========>..................] - ETA: 32:35 - loss: 0.4403 - accuracy: 0.9192 - mean_iou: 0.5141WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 33/78 [===========>..................] - ETA: 31:54 - loss: 0.4423 - accuracy: 0.9182 - mean_iou: 0.5120WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 34/78 [============>.................] - ETA: 31:13 - loss: 0.4423 - accuracy: 0.9183 - mean_iou: 0.5128WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 35/78 [============>.................] - ETA: 30:32 - loss: 0.4437 - accuracy: 0.9179 - mean_iou: 0.5097WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 36/78 [============>.................] - ETA: 29:51 - loss: 0.4454 - accuracy: 0.9176 - mean_iou: 0.5078WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 37/78 [=============>................] 
- ETA: 29:10 - loss: 0.4456 - accuracy: 0.9173 - mean_iou: 0.5068WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 38/78 [=============>................] - ETA: 28:29 - loss: 0.4447 - accuracy: 0.9177 - mean_iou: 0.5094WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 39/78 [==============>...............] - ETA: 27:48 - loss: 0.4453 - accuracy: 0.9171 - mean_iou: 0.5107WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 40/78 [==============>...............] - ETA: 27:07 - loss: 0.4452 - accuracy: 0.9173 - mean_iou: 0.5100WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 41/78 [==============>...............] - ETA: 26:25 - loss: 0.4460 - accuracy: 0.9170 - mean_iou: 0.5098WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 42/78 [===============>..............] - ETA: 25:43 - loss: 0.4465 - accuracy: 0.9166 - mean_iou: 0.5067WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 43/78 [===============>..............] - ETA: 25:01 - loss: 0.4476 - accuracy: 0.9162 - mean_iou: 0.5051WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 44/78 [===============>..............] - ETA: 24:19 - loss: 0.4490 - accuracy: 0.9158 - mean_iou: 0.5038WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 45/78 [================>.............] - ETA: 23:36 - loss: 0.4492 - accuracy: 0.9158 - mean_iou: 0.5009WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 46/78 [================>.............] - ETA: 22:54 - loss: 0.4490 - accuracy: 0.9157 - mean_iou: 0.5008WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 47/78 [=================>............] 
- ETA: 22:12 - loss: 0.4483 - accuracy: 0.9160 - mean_iou: 0.5028
WARNING:tensorflow:Can save best model only with val_accuracy available, skipping.
WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.
[per-batch progress lines and repeated warnings trimmed; epoch summaries kept below]
78/78 [==============================] - 3636s 47s/step - loss: 0.4485 - accuracy: 0.9164 - mean_iou: 0.5042 - val_loss: 0.4465 - val_accuracy: 0.9280 - val_mean_iou: 0.5568 - lr: 9.8429e-04
Epoch 4/6
78/78 [==============================] - 3641s 47s/step - loss: 0.4418 - accuracy: 0.9197 - mean_iou: 0.5098 - val_loss: 0.4483 - val_accuracy: 0.9212 - val_mean_iou: 0.5062 - lr: 9.6489e-04
Epoch 5/6
78/78 [==============================] - 3627s 47s/step - loss: 0.4233 - accuracy: 0.9241 - mean_iou: 0.5322 - val_loss: 0.4118 - val_accuracy: 0.9300 - val_mean_iou: 0.5509 - lr: 9.3815e-04
Epoch 6/6
- ETA: 39:52 - loss: 0.4230 - accuracy: 0.9249 - mean_iou: 0.5271WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 22/78 [=======>......................] - ETA: 39:15 - loss: 0.4249 - accuracy: 0.9245 - mean_iou: 0.5253WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 23/78 [=======>......................] - ETA: 38:37 - loss: 0.4237 - accuracy: 0.9248 - mean_iou: 0.5261WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 24/78 [========>.....................] - ETA: 37:58 - loss: 0.4251 - accuracy: 0.9241 - mean_iou: 0.5275WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 25/78 [========>.....................] - ETA: 37:19 - loss: 0.4254 - accuracy: 0.9236 - mean_iou: 0.5251WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 26/78 [=========>....................] - ETA: 36:40 - loss: 0.4268 - accuracy: 0.9232 - mean_iou: 0.5241WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 27/78 [=========>....................] - ETA: 36:00 - loss: 0.4247 - accuracy: 0.9237 - mean_iou: 0.5295WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 28/78 [=========>....................] - ETA: 35:20 - loss: 0.4249 - accuracy: 0.9240 - mean_iou: 0.5299WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 29/78 [==========>...................] - ETA: 34:40 - loss: 0.4253 - accuracy: 0.9238 - mean_iou: 0.5262WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 30/78 [==========>...................] - ETA: 34:00 - loss: 0.4264 - accuracy: 0.9235 - mean_iou: 0.5238WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 31/78 [==========>...................] 
- ETA: 33:19 - loss: 0.4245 - accuracy: 0.9238 - mean_iou: 0.5260WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 32/78 [===========>..................] - ETA: 32:38 - loss: 0.4243 - accuracy: 0.9238 - mean_iou: 0.5243WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 33/78 [===========>..................] - ETA: 31:57 - loss: 0.4238 - accuracy: 0.9237 - mean_iou: 0.5217WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 34/78 [============>.................] - ETA: 31:19 - loss: 0.4232 - accuracy: 0.9237 - mean_iou: 0.5220WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 35/78 [============>.................] - ETA: 30:38 - loss: 0.4228 - accuracy: 0.9235 - mean_iou: 0.5210WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 36/78 [============>.................] - ETA: 29:56 - loss: 0.4228 - accuracy: 0.9235 - mean_iou: 0.5212WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 37/78 [=============>................] - ETA: 29:14 - loss: 0.4237 - accuracy: 0.9235 - mean_iou: 0.5176WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 38/78 [=============>................] - ETA: 28:32 - loss: 0.4235 - accuracy: 0.9231 - mean_iou: 0.5176WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 39/78 [==============>...............] - ETA: 27:50 - loss: 0.4233 - accuracy: 0.9231 - mean_iou: 0.5182WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 40/78 [==============>...............] - ETA: 27:08 - loss: 0.4218 - accuracy: 0.9235 - mean_iou: 0.5195WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 41/78 [==============>...............] 
- ETA: 26:26 - loss: 0.4223 - accuracy: 0.9236 - mean_iou: 0.5180WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 42/78 [===============>..............] - ETA: 25:45 - loss: 0.4237 - accuracy: 0.9233 - mean_iou: 0.5170WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 43/78 [===============>..............] - ETA: 25:03 - loss: 0.4244 - accuracy: 0.9229 - mean_iou: 0.5148WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 44/78 [===============>..............] - ETA: 24:21 - loss: 0.4239 - accuracy: 0.9233 - mean_iou: 0.5179WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 45/78 [================>.............] - ETA: 23:39 - loss: 0.4230 - accuracy: 0.9236 - mean_iou: 0.5205WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 46/78 [================>.............] - ETA: 22:56 - loss: 0.4228 - accuracy: 0.9238 - mean_iou: 0.5223WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 47/78 [=================>............] - ETA: 22:13 - loss: 0.4236 - accuracy: 0.9236 - mean_iou: 0.5204WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 48/78 [=================>............] - ETA: 21:31 - loss: 0.4229 - accuracy: 0.9237 - mean_iou: 0.5223WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 49/78 [=================>............] - ETA: 20:48 - loss: 0.4232 - accuracy: 0.9235 - mean_iou: 0.5239WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 50/78 [==================>...........] - ETA: 20:06 - loss: 0.4218 - accuracy: 0.9238 - mean_iou: 0.5261WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 51/78 [==================>...........] 
- ETA: 19:23 - loss: 0.4211 - accuracy: 0.9238 - mean_iou: 0.5278WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 52/78 [===================>..........] - ETA: 18:40 - loss: 0.4218 - accuracy: 0.9236 - mean_iou: 0.5260WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 53/78 [===================>..........] - ETA: 17:57 - loss: 0.4220 - accuracy: 0.9233 - mean_iou: 0.5250WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 54/78 [===================>..........] - ETA: 17:15 - loss: 0.4215 - accuracy: 0.9233 - mean_iou: 0.5259WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 55/78 [====================>.........] - ETA: 16:32 - loss: 0.4214 - accuracy: 0.9232 - mean_iou: 0.5251WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 56/78 [====================>.........] - ETA: 15:49 - loss: 0.4214 - accuracy: 0.9231 - mean_iou: 0.5244WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 57/78 [====================>.........] - ETA: 15:06 - loss: 0.4211 - accuracy: 0.9230 - mean_iou: 0.5249WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 58/78 [=====================>........] - ETA: 14:23 - loss: 0.4204 - accuracy: 0.9231 - mean_iou: 0.5269WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 59/78 [=====================>........] - ETA: 13:40 - loss: 0.4200 - accuracy: 0.9232 - mean_iou: 0.5271WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 60/78 [======================>.......] - ETA: 12:57 - loss: 0.4194 - accuracy: 0.9234 - mean_iou: 0.5268WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 61/78 [======================>.......] 
- ETA: 12:14 - loss: 0.4189 - accuracy: 0.9235 - mean_iou: 0.5264WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 62/78 [======================>.......] - ETA: 11:31 - loss: 0.4183 - accuracy: 0.9236 - mean_iou: 0.5263WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 63/78 [=======================>......] - ETA: 10:48 - loss: 0.4184 - accuracy: 0.9234 - mean_iou: 0.5269WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 64/78 [=======================>......] - ETA: 10:05 - loss: 0.4178 - accuracy: 0.9238 - mean_iou: 0.5278WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 65/78 [========================>.....] - ETA: 9:22 - loss: 0.4185 - accuracy: 0.9237 - mean_iou: 0.5270 WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 66/78 [========================>.....] - ETA: 8:39 - loss: 0.4182 - accuracy: 0.9239 - mean_iou: 0.5268WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 67/78 [========================>.....] - ETA: 7:56 - loss: 0.4176 - accuracy: 0.9241 - mean_iou: 0.5288WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 68/78 [=========================>....] - ETA: 7:12 - loss: 0.4182 - accuracy: 0.9240 - mean_iou: 0.5300WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 69/78 [=========================>....] - ETA: 6:29 - loss: 0.4170 - accuracy: 0.9245 - mean_iou: 0.5329WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 70/78 [=========================>....] - ETA: 5:45 - loss: 0.4159 - accuracy: 0.9249 - mean_iou: 0.5355WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 71/78 [==========================>...] 
- ETA: 5:02 - loss: 0.4154 - accuracy: 0.9250 - mean_iou: 0.5363WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 72/78 [==========================>...] - ETA: 4:18 - loss: 0.4150 - accuracy: 0.9253 - mean_iou: 0.5370WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 73/78 [===========================>..] - ETA: 3:35 - loss: 0.4144 - accuracy: 0.9256 - mean_iou: 0.5364WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 74/78 [===========================>..] - ETA: 2:52 - loss: 0.4133 - accuracy: 0.9259 - mean_iou: 0.5386WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 75/78 [===========================>..] - ETA: 2:09 - loss: 0.4131 - accuracy: 0.9262 - mean_iou: 0.5397WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 76/78 [============================>.] - ETA: 1:26 - loss: 0.4123 - accuracy: 0.9264 - mean_iou: 0.5403WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 77/78 [============================>.] - ETA: 42s - loss: 0.4130 - accuracy: 0.9262 - mean_iou: 0.5408 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:Can save best model only with val_accuracy available, skipping. 78/78 [==============================] - ETA: 0s - loss: 0.4127 - accuracy: 0.9264 - mean_iou: 0.5418 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. 
78/78 [==============================] - 3629s 47s/step - loss: 0.4127 - accuracy: 0.9264 - mean_iou: 0.5418 - val_loss: 0.4492 - val_accuracy: 0.9331 - val_mean_iou: 0.5890 - lr: 9.0451e-04
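The `lr` values in the epoch summaries (9.3815e-04, then 9.0451e-04) are consistent with cosine annealing from a base rate of 1e-3 over 25 epochs, evaluated at epochs 4 and 5 of the schedule. A minimal sketch of such a schedule; the `base_lr` and `total_epochs` values here are inferred from the logged numbers, not taken from the notebook:

```python
import math

def cosine_lr(epoch, base_lr=1e-3, total_epochs=25):
    """Cosine-annealed learning rate (parameters inferred from the logged lr values)."""
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

# cosine_lr(4) is ~9.3815e-04 and cosine_lr(5) is ~9.0451e-04,
# matching the lr column in the two epoch summaries above.
```

If this is indeed the schedule behind the `learning_rate` callback, the epoch indices suggest the scheduler counts epochs across the earlier training run as well.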
model.load_weights("4000_checkpoint.hdf5") # resume from the previously saved checkpoint
from tensorflow.keras.callbacks import ModelCheckpoint
checkpoint_file = "4000_checkpoint_1.hdf5"
# Note: save_freq=1 evaluates (and logs) the monitored metric after every batch;
# save_freq='epoch' would check once per epoch and produce far less output.
checkpoint = ModelCheckpoint(checkpoint_file, monitor='accuracy', verbose=1, save_best_only=True, mode='max', save_freq=1)
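With `save_best_only=True`, `ModelCheckpoint` keeps a running best of the monitored metric and saves only on improvement; when the metric is unavailable (as `val_accuracy` is mid-epoch, hence the "skipping" warnings earlier) it saves nothing. A minimal pure-Python sketch of that decision, using a hypothetical `should_save` helper rather than Keras internals:

```python
def should_save(value, best, mode="max"):
    """Decide whether a save-best-only checkpoint callback would save.

    `value` is the monitored metric (None if unavailable, e.g. val_accuracy
    before validation has run); `best` is the best value seen so far.
    Returns (save, new_best).
    """
    if value is None:
        # mirrors "Can save best model only with val_accuracy available, skipping."
        return False, best
    improved = value > best if mode == "max" else value < best
    return improved, (value if improved else best)

# First call mirrors "accuracy improved from -inf to 0.94644, saving model...":
save, best = should_save(0.94644, float("-inf"))
# A later, lower value mirrors "accuracy did not improve from 0.94644":
save, best = should_save(0.9300, best)
```

Because `save_freq=1` fires the callback after every batch, this comparison (and its log line) runs per batch rather than per epoch in the output below.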
# Model.fit_generator is deprecated in TF 2.x; Model.fit accepts generators directly.
history = model.fit_generator(train_gen, validation_data=valid_gen, callbacks=[learning_rate, checkpoint], epochs=6, workers=8, use_multiprocessing=True)
WARNING:tensorflow:From <ipython-input-50-af21aa0f54e4>:1: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version. Instructions for updating: Please use Model.fit, which supports generators.
Epoch 1/6
WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.
Epoch 00001: accuracy improved from -inf to 0.94644, saving model to 4000_checkpoint_1.hdf5
... (per-batch progress and repeated "Epoch 00001: accuracy did not improve from 0.94644" messages trimmed) ...
78/78 [==============================] - 4143s 53s/step - loss: 0.4237 - accuracy: 0.9288 - mean_iou: 0.5234 - val_loss: 0.4347 - val_accuracy: 0.9392 - val_mean_iou: 0.5553 - lr: 0.0010
Epoch 2/6
... (per-batch progress trimmed; output truncated mid-epoch) ...
- ETA: 26:13 - loss: 0.4110 - accuracy: 0.9335 - mean_iou: 0.5632 Epoch 00002: accuracy did not improve from 0.94644 47/78 [=================>............] - ETA: 25:24 - loss: 0.4107 - accuracy: 0.9334 - mean_iou: 0.5623 Epoch 00002: accuracy did not improve from 0.94644 48/78 [=================>............] - ETA: 24:36 - loss: 0.4113 - accuracy: 0.9334 - mean_iou: 0.5620 Epoch 00002: accuracy did not improve from 0.94644 49/78 [=================>............] - ETA: 23:47 - loss: 0.4104 - accuracy: 0.9334 - mean_iou: 0.5634 Epoch 00002: accuracy did not improve from 0.94644 50/78 [==================>...........] - ETA: 22:59 - loss: 0.4100 - accuracy: 0.9334 - mean_iou: 0.5647 Epoch 00002: accuracy did not improve from 0.94644 51/78 [==================>...........] - ETA: 22:10 - loss: 0.4108 - accuracy: 0.9329 - mean_iou: 0.5627 Epoch 00002: accuracy did not improve from 0.94644 52/78 [===================>..........] - ETA: 21:21 - loss: 0.4107 - accuracy: 0.9328 - mean_iou: 0.5617 Epoch 00002: accuracy did not improve from 0.94644 53/78 [===================>..........] - ETA: 20:32 - loss: 0.4103 - accuracy: 0.9327 - mean_iou: 0.5631 Epoch 00002: accuracy did not improve from 0.94644 54/78 [===================>..........] - ETA: 19:43 - loss: 0.4105 - accuracy: 0.9324 - mean_iou: 0.5624 Epoch 00002: accuracy did not improve from 0.94644 55/78 [====================>.........] - ETA: 18:54 - loss: 0.4114 - accuracy: 0.9322 - mean_iou: 0.5612 Epoch 00002: accuracy did not improve from 0.94644 56/78 [====================>.........] - ETA: 18:05 - loss: 0.4114 - accuracy: 0.9319 - mean_iou: 0.5592 Epoch 00002: accuracy did not improve from 0.94644 57/78 [====================>.........] - ETA: 17:16 - loss: 0.4116 - accuracy: 0.9316 - mean_iou: 0.5572 Epoch 00002: accuracy did not improve from 0.94644 58/78 [=====================>........] 
- ETA: 16:27 - loss: 0.4121 - accuracy: 0.9312 - mean_iou: 0.5556 Epoch 00002: accuracy did not improve from 0.94644 59/78 [=====================>........] - ETA: 15:38 - loss: 0.4113 - accuracy: 0.9314 - mean_iou: 0.5543 Epoch 00002: accuracy did not improve from 0.94644 60/78 [======================>.......] - ETA: 14:49 - loss: 0.4110 - accuracy: 0.9315 - mean_iou: 0.5531 Epoch 00002: accuracy did not improve from 0.94644 61/78 [======================>.......] - ETA: 14:00 - loss: 0.4105 - accuracy: 0.9316 - mean_iou: 0.5544 Epoch 00002: accuracy did not improve from 0.94644 62/78 [======================>.......] - ETA: 13:11 - loss: 0.4102 - accuracy: 0.9317 - mean_iou: 0.5538 Epoch 00002: accuracy did not improve from 0.94644 63/78 [=======================>......] - ETA: 12:21 - loss: 0.4104 - accuracy: 0.9316 - mean_iou: 0.5542 Epoch 00002: accuracy did not improve from 0.94644 64/78 [=======================>......] - ETA: 11:32 - loss: 0.4093 - accuracy: 0.9319 - mean_iou: 0.5551 Epoch 00002: accuracy did not improve from 0.94644 65/78 [========================>.....] - ETA: 10:43 - loss: 0.4098 - accuracy: 0.9319 - mean_iou: 0.5533 Epoch 00002: accuracy did not improve from 0.94644 66/78 [========================>.....] - ETA: 9:53 - loss: 0.4094 - accuracy: 0.9320 - mean_iou: 0.5546 Epoch 00002: accuracy did not improve from 0.94644 67/78 [========================>.....] - ETA: 9:04 - loss: 0.4101 - accuracy: 0.9318 - mean_iou: 0.5545 Epoch 00002: accuracy did not improve from 0.94644 68/78 [=========================>....] - ETA: 8:14 - loss: 0.4099 - accuracy: 0.9317 - mean_iou: 0.5542 Epoch 00002: accuracy did not improve from 0.94644 69/78 [=========================>....] - ETA: 7:24 - loss: 0.4101 - accuracy: 0.9318 - mean_iou: 0.5528 Epoch 00002: accuracy did not improve from 0.94644 70/78 [=========================>....] 
- ETA: 6:34 - loss: 0.4102 - accuracy: 0.9317 - mean_iou: 0.5511 Epoch 00002: accuracy did not improve from 0.94644 71/78 [==========================>...] - ETA: 5:45 - loss: 0.4105 - accuracy: 0.9317 - mean_iou: 0.5506 Epoch 00002: accuracy did not improve from 0.94644 72/78 [==========================>...] - ETA: 4:55 - loss: 0.4107 - accuracy: 0.9318 - mean_iou: 0.5511 Epoch 00002: accuracy did not improve from 0.94644 73/78 [===========================>..] - ETA: 4:06 - loss: 0.4114 - accuracy: 0.9318 - mean_iou: 0.5493 Epoch 00002: accuracy did not improve from 0.94644 74/78 [===========================>..] - ETA: 3:16 - loss: 0.4113 - accuracy: 0.9319 - mean_iou: 0.5514 Epoch 00002: accuracy did not improve from 0.94644 75/78 [===========================>..] - ETA: 2:27 - loss: 0.4110 - accuracy: 0.9321 - mean_iou: 0.5522 Epoch 00002: accuracy did not improve from 0.94644 76/78 [============================>.] - ETA: 1:38 - loss: 0.4114 - accuracy: 0.9320 - mean_iou: 0.5508 Epoch 00002: accuracy did not improve from 0.94644 77/78 [============================>.] - ETA: 49s - loss: 0.4110 - accuracy: 0.9323 - mean_iou: 0.5515 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. Epoch 00002: accuracy did not improve from 0.94644 78/78 [==============================] - ETA: 0s - loss: 0.4117 - accuracy: 0.9320 - mean_iou: 0.5512 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. 
78/78 [==============================] - 4111s 53s/step - loss: 0.4117 - accuracy: 0.9320 - mean_iou: 0.5512 - val_loss: 0.4324 - val_accuracy: 0.9253 - val_mean_iou: 0.4669 - lr: 9.9606e-04 Epoch 3/6 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. Epoch 00003: accuracy did not improve from 0.94644 1/78 [..............................] - ETA: 0s - loss: 0.3943 - accuracy: 0.9429 - mean_iou: 0.5751 Epoch 00003: accuracy did not improve from 0.94644 2/78 [..............................] - ETA: 31:48 - loss: 0.4072 - accuracy: 0.9425 - mean_iou: 0.5413 Epoch 00003: accuracy improved from 0.94644 to 0.94873, saving model to 4000_checkpoint_1.hdf5 3/78 [>.............................] - ETA: 42:41 - loss: 0.3889 - accuracy: 0.9487 - mean_iou: 0.5784 Epoch 00003: accuracy did not improve from 0.94873 4/78 [>.............................] - ETA: 47:12 - loss: 0.3966 - accuracy: 0.9465 - mean_iou: 0.5681 Epoch 00003: accuracy did not improve from 0.94873 5/78 [>.............................] - ETA: 49:27 - loss: 0.4002 - accuracy: 0.9446 - mean_iou: 0.5757 Epoch 00003: accuracy did not improve from 0.94873 6/78 [=>............................] - ETA: 50:39 - loss: 0.4052 - accuracy: 0.9460 - mean_iou: 0.5821 Epoch 00003: accuracy did not improve from 0.94873 7/78 [=>............................] - ETA: 51:15 - loss: 0.3991 - accuracy: 0.9462 - mean_iou: 0.5858 Epoch 00003: accuracy did not improve from 0.94873 8/78 [==>...........................] - ETA: 51:31 - loss: 0.3915 - accuracy: 0.9473 - mean_iou: 0.5896 Epoch 00003: accuracy did not improve from 0.94873 9/78 [==>...........................] - ETA: 51:41 - loss: 0.3899 - accuracy: 0.9468 - mean_iou: 0.5961 Epoch 00003: accuracy did not improve from 0.94873 10/78 [==>...........................] 
- ETA: 51:34 - loss: 0.3907 - accuracy: 0.9453 - mean_iou: 0.5937 Epoch 00003: accuracy did not improve from 0.94873 11/78 [===>..........................] - ETA: 51:19 - loss: 0.3919 - accuracy: 0.9435 - mean_iou: 0.5924 Epoch 00003: accuracy did not improve from 0.94873 12/78 [===>..........................] - ETA: 51:01 - loss: 0.3922 - accuracy: 0.9419 - mean_iou: 0.5909 Epoch 00003: accuracy did not improve from 0.94873 13/78 [====>.........................] - ETA: 50:37 - loss: 0.3920 - accuracy: 0.9410 - mean_iou: 0.5877 Epoch 00003: accuracy did not improve from 0.94873 14/78 [====>.........................] - ETA: 50:07 - loss: 0.3950 - accuracy: 0.9391 - mean_iou: 0.5780 Epoch 00003: accuracy did not improve from 0.94873 15/78 [====>.........................] - ETA: 49:34 - loss: 0.3960 - accuracy: 0.9379 - mean_iou: 0.5716 Epoch 00003: accuracy did not improve from 0.94873 16/78 [=====>........................] - ETA: 49:02 - loss: 0.3966 - accuracy: 0.9367 - mean_iou: 0.5733 Epoch 00003: accuracy did not improve from 0.94873 17/78 [=====>........................] - ETA: 48:26 - loss: 0.3954 - accuracy: 0.9359 - mean_iou: 0.5700 Epoch 00003: accuracy did not improve from 0.94873 18/78 [=====>........................] - ETA: 47:48 - loss: 0.3941 - accuracy: 0.9355 - mean_iou: 0.5703 Epoch 00003: accuracy did not improve from 0.94873 19/78 [======>.......................] - ETA: 47:09 - loss: 0.3917 - accuracy: 0.9356 - mean_iou: 0.5742 Epoch 00003: accuracy did not improve from 0.94873 20/78 [======>.......................] - ETA: 46:29 - loss: 0.3906 - accuracy: 0.9356 - mean_iou: 0.5780 Epoch 00003: accuracy did not improve from 0.94873 21/78 [=======>......................] - ETA: 45:47 - loss: 0.3899 - accuracy: 0.9356 - mean_iou: 0.5789 Epoch 00003: accuracy did not improve from 0.94873 22/78 [=======>......................] 
- ETA: 45:05 - loss: 0.3886 - accuracy: 0.9354 - mean_iou: 0.5850 Epoch 00003: accuracy did not improve from 0.94873 23/78 [=======>......................] - ETA: 44:22 - loss: 0.3879 - accuracy: 0.9355 - mean_iou: 0.5848 Epoch 00003: accuracy did not improve from 0.94873 24/78 [========>.....................] - ETA: 43:38 - loss: 0.3882 - accuracy: 0.9358 - mean_iou: 0.5859 Epoch 00003: accuracy did not improve from 0.94873 25/78 [========>.....................] - ETA: 42:55 - loss: 0.3876 - accuracy: 0.9358 - mean_iou: 0.5844 Epoch 00003: accuracy did not improve from 0.94873 26/78 [=========>....................] - ETA: 42:10 - loss: 0.3885 - accuracy: 0.9357 - mean_iou: 0.5839 Epoch 00003: accuracy did not improve from 0.94873 27/78 [=========>....................] - ETA: 41:25 - loss: 0.3886 - accuracy: 0.9355 - mean_iou: 0.5811 Epoch 00003: accuracy did not improve from 0.94873 28/78 [=========>....................] - ETA: 40:40 - loss: 0.3887 - accuracy: 0.9358 - mean_iou: 0.5812 Epoch 00003: accuracy did not improve from 0.94873 29/78 [==========>...................] - ETA: 39:54 - loss: 0.3901 - accuracy: 0.9361 - mean_iou: 0.5797 Epoch 00003: accuracy did not improve from 0.94873 30/78 [==========>...................] - ETA: 39:08 - loss: 0.3896 - accuracy: 0.9364 - mean_iou: 0.5797 Epoch 00003: accuracy did not improve from 0.94873 31/78 [==========>...................] - ETA: 38:22 - loss: 0.3887 - accuracy: 0.9367 - mean_iou: 0.5808 Epoch 00003: accuracy did not improve from 0.94873 32/78 [===========>..................] - ETA: 37:35 - loss: 0.3888 - accuracy: 0.9366 - mean_iou: 0.5832 Epoch 00003: accuracy did not improve from 0.94873 33/78 [===========>..................] - ETA: 36:48 - loss: 0.3881 - accuracy: 0.9367 - mean_iou: 0.5846 Epoch 00003: accuracy did not improve from 0.94873 34/78 [============>.................] 
- ETA: 36:02 - loss: 0.3873 - accuracy: 0.9371 - mean_iou: 0.5857 Epoch 00003: accuracy did not improve from 0.94873 35/78 [============>.................] - ETA: 35:15 - loss: 0.3890 - accuracy: 0.9373 - mean_iou: 0.5851 Epoch 00003: accuracy did not improve from 0.94873 36/78 [============>.................] - ETA: 34:27 - loss: 0.3876 - accuracy: 0.9376 - mean_iou: 0.5875 Epoch 00003: accuracy did not improve from 0.94873 37/78 [=============>................] - ETA: 33:40 - loss: 0.3881 - accuracy: 0.9374 - mean_iou: 0.5871 Epoch 00003: accuracy did not improve from 0.94873 38/78 [=============>................] - ETA: 32:53 - loss: 0.3885 - accuracy: 0.9373 - mean_iou: 0.5857 Epoch 00003: accuracy did not improve from 0.94873 39/78 [==============>...............] - ETA: 32:05 - loss: 0.3898 - accuracy: 0.9371 - mean_iou: 0.5843 Epoch 00003: accuracy did not improve from 0.94873 40/78 [==============>...............] - ETA: 31:16 - loss: 0.3895 - accuracy: 0.9372 - mean_iou: 0.5837 Epoch 00003: accuracy did not improve from 0.94873 41/78 [==============>...............] - ETA: 30:28 - loss: 0.3891 - accuracy: 0.9372 - mean_iou: 0.5849 Epoch 00003: accuracy did not improve from 0.94873 42/78 [===============>..............] - ETA: 29:40 - loss: 0.3905 - accuracy: 0.9368 - mean_iou: 0.5834 Epoch 00003: accuracy did not improve from 0.94873 43/78 [===============>..............] - ETA: 28:51 - loss: 0.3902 - accuracy: 0.9367 - mean_iou: 0.5825 Epoch 00003: accuracy did not improve from 0.94873 44/78 [===============>..............] - ETA: 28:02 - loss: 0.3902 - accuracy: 0.9368 - mean_iou: 0.5845 Epoch 00003: accuracy did not improve from 0.94873 45/78 [================>.............] - ETA: 27:13 - loss: 0.3894 - accuracy: 0.9369 - mean_iou: 0.5829 Epoch 00003: accuracy did not improve from 0.94873 46/78 [================>.............] 
- ETA: 26:24 - loss: 0.3890 - accuracy: 0.9370 - mean_iou: 0.5815 Epoch 00003: accuracy did not improve from 0.94873 47/78 [=================>............] - ETA: 25:36 - loss: 0.3894 - accuracy: 0.9367 - mean_iou: 0.5795 Epoch 00003: accuracy did not improve from 0.94873 48/78 [=================>............] - ETA: 24:47 - loss: 0.3896 - accuracy: 0.9367 - mean_iou: 0.5804 Epoch 00003: accuracy did not improve from 0.94873 49/78 [=================>............] - ETA: 23:58 - loss: 0.3890 - accuracy: 0.9369 - mean_iou: 0.5805 Epoch 00003: accuracy did not improve from 0.94873 50/78 [==================>...........] - ETA: 23:09 - loss: 0.3911 - accuracy: 0.9366 - mean_iou: 0.5771 Epoch 00003: accuracy did not improve from 0.94873 51/78 [==================>...........] - ETA: 22:20 - loss: 0.3907 - accuracy: 0.9365 - mean_iou: 0.5786 Epoch 00003: accuracy did not improve from 0.94873 52/78 [===================>..........] - ETA: 21:31 - loss: 0.3906 - accuracy: 0.9365 - mean_iou: 0.5798 Epoch 00003: accuracy did not improve from 0.94873 53/78 [===================>..........] - ETA: 20:41 - loss: 0.3898 - accuracy: 0.9367 - mean_iou: 0.5803 Epoch 00003: accuracy did not improve from 0.94873 54/78 [===================>..........] - ETA: 19:52 - loss: 0.3901 - accuracy: 0.9369 - mean_iou: 0.5819 Epoch 00003: accuracy did not improve from 0.94873 55/78 [====================>.........] - ETA: 19:03 - loss: 0.3892 - accuracy: 0.9371 - mean_iou: 0.5840 Epoch 00003: accuracy did not improve from 0.94873 56/78 [====================>.........] - ETA: 18:13 - loss: 0.3892 - accuracy: 0.9369 - mean_iou: 0.5840 Epoch 00003: accuracy did not improve from 0.94873 57/78 [====================>.........] - ETA: 17:24 - loss: 0.3885 - accuracy: 0.9370 - mean_iou: 0.5856 Epoch 00003: accuracy did not improve from 0.94873 58/78 [=====================>........] 
- ETA: 16:35 - loss: 0.3894 - accuracy: 0.9368 - mean_iou: 0.5847 Epoch 00003: accuracy did not improve from 0.94873 59/78 [=====================>........] - ETA: 15:45 - loss: 0.3885 - accuracy: 0.9370 - mean_iou: 0.5871 Epoch 00003: accuracy did not improve from 0.94873 60/78 [======================>.......] - ETA: 14:56 - loss: 0.3893 - accuracy: 0.9368 - mean_iou: 0.5857 Epoch 00003: accuracy did not improve from 0.94873 61/78 [======================>.......] - ETA: 14:06 - loss: 0.3893 - accuracy: 0.9367 - mean_iou: 0.5856 Epoch 00003: accuracy did not improve from 0.94873 62/78 [======================>.......] - ETA: 13:17 - loss: 0.3894 - accuracy: 0.9366 - mean_iou: 0.5847 Epoch 00003: accuracy did not improve from 0.94873 63/78 [=======================>......] - ETA: 12:27 - loss: 0.3896 - accuracy: 0.9366 - mean_iou: 0.5836 Epoch 00003: accuracy did not improve from 0.94873 64/78 [=======================>......] - ETA: 11:37 - loss: 0.3894 - accuracy: 0.9366 - mean_iou: 0.5836 Epoch 00003: accuracy did not improve from 0.94873 65/78 [========================>.....] - ETA: 10:48 - loss: 0.3891 - accuracy: 0.9366 - mean_iou: 0.5844 Epoch 00003: accuracy did not improve from 0.94873 66/78 [========================>.....] - ETA: 9:58 - loss: 0.3903 - accuracy: 0.9361 - mean_iou: 0.5834 Epoch 00003: accuracy did not improve from 0.94873 67/78 [========================>.....] - ETA: 9:08 - loss: 0.3895 - accuracy: 0.9363 - mean_iou: 0.5842 Epoch 00003: accuracy did not improve from 0.94873 68/78 [=========================>....] - ETA: 8:18 - loss: 0.3891 - accuracy: 0.9364 - mean_iou: 0.5842 Epoch 00003: accuracy did not improve from 0.94873 69/78 [=========================>....] - ETA: 7:28 - loss: 0.3889 - accuracy: 0.9365 - mean_iou: 0.5835 Epoch 00003: accuracy did not improve from 0.94873 70/78 [=========================>....] 
- ETA: 6:37 - loss: 0.3900 - accuracy: 0.9364 - mean_iou: 0.5816 Epoch 00003: accuracy did not improve from 0.94873 71/78 [==========================>...] - ETA: 5:47 - loss: 0.3901 - accuracy: 0.9363 - mean_iou: 0.5806 Epoch 00003: accuracy did not improve from 0.94873 72/78 [==========================>...] - ETA: 4:57 - loss: 0.3901 - accuracy: 0.9364 - mean_iou: 0.5805 Epoch 00003: accuracy did not improve from 0.94873 73/78 [===========================>..] - ETA: 4:08 - loss: 0.3902 - accuracy: 0.9364 - mean_iou: 0.5795 Epoch 00003: accuracy did not improve from 0.94873 74/78 [===========================>..] - ETA: 3:18 - loss: 0.3901 - accuracy: 0.9363 - mean_iou: 0.5798 Epoch 00003: accuracy did not improve from 0.94873 75/78 [===========================>..] - ETA: 2:28 - loss: 0.3909 - accuracy: 0.9363 - mean_iou: 0.5795 Epoch 00003: accuracy did not improve from 0.94873 76/78 [============================>.] - ETA: 1:38 - loss: 0.3914 - accuracy: 0.9362 - mean_iou: 0.5798 Epoch 00003: accuracy did not improve from 0.94873 77/78 [============================>.] - ETA: 49s - loss: 0.3919 - accuracy: 0.9362 - mean_iou: 0.5786 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. Epoch 00003: accuracy did not improve from 0.94873 78/78 [==============================] - ETA: 0s - loss: 0.3919 - accuracy: 0.9362 - mean_iou: 0.5779 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. 
78/78 [==============================] - 4141s 53s/step - loss: 0.3919 - accuracy: 0.9362 - mean_iou: 0.5779 - val_loss: 0.4271 - val_accuracy: 0.9390 - val_mean_iou: 0.5421 - lr: 9.8429e-04 Epoch 4/6 WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended. Epoch 00004: accuracy did not improve from 0.94873 1/78 [..............................] - ETA: 0s - loss: 0.3876 - accuracy: 0.9384 - mean_iou: 0.5804 Epoch 00004: accuracy did not improve from 0.94873 2/78 [..............................] - ETA: 32:16 - loss: 0.4070 - accuracy: 0.9361 - mean_iou: 0.5507 Epoch 00004: accuracy did not improve from 0.94873 3/78 [>.............................] - ETA: 42:25 - loss: 0.4017 - accuracy: 0.9353 - mean_iou: 0.5642 Epoch 00004: accuracy did not improve from 0.94873 4/78 [>.............................] - ETA: 47:05 - loss: 0.3877 - accuracy: 0.9381 - mean_iou: 0.5860 Epoch 00004: accuracy did not improve from 0.94873 5/78 [>.............................] - ETA: 49:40 - loss: 0.3888 - accuracy: 0.9358 - mean_iou: 0.5945 Epoch 00004: accuracy did not improve from 0.94873 6/78 [=>............................] - ETA: 50:54 - loss: 0.3840 - accuracy: 0.9372 - mean_iou: 0.5990 Epoch 00004: accuracy did not improve from 0.94873 7/78 [=>............................] - ETA: 51:36 - loss: 0.3993 - accuracy: 0.9328 - mean_iou: 0.5746 Epoch 00004: accuracy did not improve from 0.94873 8/78 [==>...........................] - ETA: 51:52 - loss: 0.3973 - accuracy: 0.9330 - mean_iou: 0.5850 Epoch 00004: accuracy did not improve from 0.94873 9/78 [==>...........................] - ETA: 51:54 - loss: 0.3929 - accuracy: 0.9331 - mean_iou: 0.5911 Epoch 00004: accuracy did not improve from 0.94873 10/78 [==>...........................] 
- ETA: 51:47 - loss: 0.3918 - accuracy: 0.9326 - mean_iou: 0.5955 Epoch 00004: accuracy did not improve from 0.94873 11/78 [===>..........................] - ETA: 51:34 - loss: 0.3933 - accuracy: 0.9323 - mean_iou: 0.5939 Epoch 00004: accuracy did not improve from 0.94873 12/78 [===>..........................] - ETA: 51:11 - loss: 0.3939 - accuracy: 0.9314 - mean_iou: 0.5933 Epoch 00004: accuracy did not improve from 0.94873 13/78 [====>.........................] - ETA: 50:44 - loss: 0.3919 - accuracy: 0.9311 - mean_iou: 0.5906 Epoch 00004: accuracy did not improve from 0.94873 14/78 [====>.........................] - ETA: 50:13 - loss: 0.3944 - accuracy: 0.9307 - mean_iou: 0.5876 Epoch 00004: accuracy did not improve from 0.94873 15/78 [====>.........................] - ETA: 49:39 - loss: 0.3997 - accuracy: 0.9295 - mean_iou: 0.5811 Epoch 00004: accuracy did not improve from 0.94873 16/78 [=====>........................] - ETA: 49:06 - loss: 0.4031 - accuracy: 0.9294 - mean_iou: 0.5731 Epoch 00004: accuracy did not improve from 0.94873 17/78 [=====>........................] - ETA: 48:29 - loss: 0.4010 - accuracy: 0.9300 - mean_iou: 0.5721 Epoch 00004: accuracy did not improve from 0.94873 18/78 [=====>........................] - ETA: 47:51 - loss: 0.4008 - accuracy: 0.9307 - mean_iou: 0.5653 Epoch 00004: accuracy did not improve from 0.94873 19/78 [======>.......................] - ETA: 47:12 - loss: 0.4017 - accuracy: 0.9314 - mean_iou: 0.5630 Epoch 00004: accuracy did not improve from 0.94873 20/78 [======>.......................] - ETA: 46:31 - loss: 0.4006 - accuracy: 0.9316 - mean_iou: 0.5653 Epoch 00004: accuracy did not improve from 0.94873 21/78 [=======>......................] - ETA: 45:49 - loss: 0.3994 - accuracy: 0.9324 - mean_iou: 0.5709 Epoch 00004: accuracy did not improve from 0.94873 22/78 [=======>......................] 
- ETA: 45:07 - loss: 0.3991 - accuracy: 0.9328 - mean_iou: 0.5714 Epoch 00004: accuracy did not improve from 0.94873 23/78 [=======>......................] - ETA: 44:25 - loss: 0.3968 - accuracy: 0.9339 - mean_iou: 0.5762 Epoch 00004: accuracy did not improve from 0.94873 24/78 [========>.....................] - ETA: 43:41 - loss: 0.3957 - accuracy: 0.9346 - mean_iou: 0.5796 Epoch 00004: accuracy did not improve from 0.94873 25/78 [========>.....................] - ETA: 42:57 - loss: 0.3972 - accuracy: 0.9342 - mean_iou: 0.5792 Epoch 00004: accuracy did not improve from 0.94873 26/78 [=========>....................] - ETA: 42:11 - loss: 0.3964 - accuracy: 0.9346 - mean_iou: 0.5775 Epoch 00004: accuracy did not improve from 0.94873 27/78 [=========>....................] - ETA: 41:25 - loss: 0.3944 - accuracy: 0.9351 - mean_iou: 0.5776 Epoch 00004: accuracy did not improve from 0.94873 28/78 [=========>....................] - ETA: 40:39 - loss: 0.3939 - accuracy: 0.9350 - mean_iou: 0.5783 Epoch 00004: accuracy did not improve from 0.94873 29/78 [==========>...................] - ETA: 39:53 - loss: 0.3944 - accuracy: 0.9346 - mean_iou: 0.5781 Epoch 00004: accuracy did not improve from 0.94873 30/78 [==========>...................] - ETA: 39:07 - loss: 0.3928 - accuracy: 0.9349 - mean_iou: 0.5792 Epoch 00004: accuracy did not improve from 0.94873 31/78 [==========>...................] - ETA: 38:20 - loss: 0.3931 - accuracy: 0.9346 - mean_iou: 0.5776 Epoch 00004: accuracy did not improve from 0.94873 32/78 [===========>..................] - ETA: 37:33 - loss: 0.3956 - accuracy: 0.9346 - mean_iou: 0.5761 Epoch 00004: accuracy did not improve from 0.94873 33/78 [===========>..................] - ETA: 36:46 - loss: 0.3950 - accuracy: 0.9343 - mean_iou: 0.5759 Epoch 00004: accuracy did not improve from 0.94873 34/78 [============>.................] 
- ETA: 35:58 - loss: 0.3936 - accuracy: 0.9345 - mean_iou: 0.5776 Epoch 00004: accuracy did not improve from 0.94873 35/78 [============>.................] - ETA: 35:11 - loss: 0.3951 - accuracy: 0.9344 - mean_iou: 0.5787 Epoch 00004: accuracy did not improve from 0.94873 36/78 [============>.................] - ETA: 34:24 - loss: 0.3947 - accuracy: 0.9346 - mean_iou: 0.5775 Epoch 00004: accuracy did not improve from 0.94873 37/78 [=============>................] - ETA: 33:35 - loss: 0.3929 - accuracy: 0.9350 - mean_iou: 0.5795 Epoch 00004: accuracy did not improve from 0.94873 38/78 [=============>................] - ETA: 32:48 - loss: 0.3928 - accuracy: 0.9352 - mean_iou: 0.5812 Epoch 00004: accuracy did not improve from 0.94873 39/78 [==============>...............] - ETA: 32:00 - loss: 0.3926 - accuracy: 0.9354 - mean_iou: 0.5793 Epoch 00004: accuracy did not improve from 0.94873 40/78 [==============>...............] - ETA: 31:12 - loss: 0.3919 - accuracy: 0.9355 - mean_iou: 0.5812 Epoch 00004: accuracy did not improve from 0.94873 41/78 [==============>...............] - ETA: 30:24 - loss: 0.3931 - accuracy: 0.9357 - mean_iou: 0.5809 Epoch 00004: accuracy did not improve from 0.94873 42/78 [===============>..............] - ETA: 29:36 - loss: 0.3930 - accuracy: 0.9356 - mean_iou: 0.5831 Epoch 00004: accuracy did not improve from 0.94873 43/78 [===============>..............] - ETA: 28:47 - loss: 0.3939 - accuracy: 0.9355 - mean_iou: 0.5823 Epoch 00004: accuracy did not improve from 0.94873 44/78 [===============>..............] - ETA: 27:59 - loss: 0.3933 - accuracy: 0.9358 - mean_iou: 0.5847 Epoch 00004: accuracy did not improve from 0.94873 45/78 [================>.............] - ETA: 27:10 - loss: 0.3929 - accuracy: 0.9361 - mean_iou: 0.5858 Epoch 00004: accuracy did not improve from 0.94873 46/78 [================>.............] 
[training log trimmed: per-batch progress bars removed; the ModelCheckpoint callback printed "Epoch 0000N: accuracy did not improve from 0.94873" at every step, and TensorFlow repeatedly warned that multiprocessing can interact badly with TensorFlow and recommends tf.data for high-performance pipelines. The epoch-end summaries were:]
78/78 [==============================] - 4139s 53s/step - loss: 0.3908 - accuracy: 0.9366 - mean_iou: 0.5795 - val_loss: 0.4269 - val_accuracy: 0.9307 - val_mean_iou: 0.4389 - lr: 9.6489e-04
Epoch 5/6
78/78 [==============================] - 4161s 53s/step - loss: 0.3852 - accuracy: 0.9380 - mean_iou: 0.5819 - val_loss: 0.4410 - val_accuracy: 0.9433 - val_mean_iou: 0.5734 - lr: 9.3815e-04
Epoch 6/6
78/78 [==============================] - 4173s 54s/step - loss: 0.3737 - accuracy: 0.9405 - mean_iou: 0.5968 - val_loss: 0.4118 - val_accuracy: 0.9390 - val_mean_iou: 0.5383 - lr: 9.0451e-04
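The mean_iou tracked above is an intersection-over-union between predicted and ground-truth masks, averaged over the batch. The notebook's own metric definition is not shown in this section, so the following is an illustrative NumPy sketch (binary masks, predictions thresholded at 0.5), not necessarily the exact implementation used during training:

```python
import numpy as np

def mean_iou(y_true, y_pred, eps=1e-7):
    """Mean IoU over a batch of binary masks, shape (batch, H, W)."""
    y_pred = (y_pred > 0.5).astype(np.float32)          # threshold probabilities
    inter = np.sum(y_true * y_pred, axis=(1, 2))        # overlap per sample
    union = np.sum(np.maximum(y_true, y_pred), axis=(1, 2))
    return float(np.mean((inter + eps) / (union + eps)))  # eps guards empty masks

# Toy example: true box of 4 pixels, predicted box of 8 pixels, 4 overlapping
t = np.zeros((1, 4, 4), np.float32); t[0, :2, :2] = 1
p = np.zeros((1, 4, 4), np.float32); p[0, :2, :4] = 1
print(mean_iou(t, p))  # intersection 4, union 8 -> ~0.5
```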
# Serialize the trained architecture to JSON and the weights to HDF5
# (reload later with keras.models.model_from_json + model.load_weights)
model_json = model.to_json()
with open("model4000_6.json", "w") as json_file:
    json_file.write(model_json)
model.save_weights("model4000_6.h5")
# Plot training curves: loss, accuracy and mean IoU, train vs. validation
plt.figure(figsize=(12, 4))
plt.subplot(131)
plt.plot(history.epoch, history.history["loss"], label="Train loss")
plt.plot(history.epoch, history.history["val_loss"], label="Valid loss")
plt.legend()
plt.subplot(132)
plt.plot(history.epoch, history.history["accuracy"], label="Train accuracy")
plt.plot(history.epoch, history.history["val_accuracy"], label="Valid accuracy")
plt.legend()
plt.subplot(133)
plt.plot(history.epoch, history.history["mean_iou"], label="Train iou")
plt.plot(history.epoch, history.history["val_mean_iou"], label="Valid iou")
plt.legend()
plt.show()
model.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 256, 256, 1) 0
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 256, 256, 32) 288 input_1[0][0]
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 256, 256, 32) 128 conv2d[0][0]
__________________________________________________________________________________________________
leaky_re_lu (LeakyReLU) (None, 256, 256, 32) 0 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 256, 256, 64) 2048 leaky_re_lu[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 128, 128, 64) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 128, 128, 64) 256 max_pooling2d[0][0]
__________________________________________________________________________________________________
leaky_re_lu_1 (LeakyReLU) (None, 128, 128, 64) 0 batch_normalization_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 128, 128, 64) 36864 leaky_re_lu_1[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 128, 128, 64) 256 conv2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_2 (LeakyReLU) (None, 128, 128, 64) 0 batch_normalization_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 128, 128, 64) 36864 leaky_re_lu_2[0][0]
__________________________________________________________________________________________________
add (Add) (None, 128, 128, 64) 0 conv2d_3[0][0]
max_pooling2d[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 128, 128, 64) 256 add[0][0]
__________________________________________________________________________________________________
leaky_re_lu_3 (LeakyReLU) (None, 128, 128, 64) 0 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 128, 128, 64) 36864 leaky_re_lu_3[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 128, 128, 64) 256 conv2d_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_4 (LeakyReLU) (None, 128, 128, 64) 0 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 128, 128, 64) 36864 leaky_re_lu_4[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 128, 128, 64) 0 conv2d_5[0][0]
add[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 128, 128, 64) 256 add_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_5 (LeakyReLU) (None, 128, 128, 64) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 128, 128, 128) 8192 leaky_re_lu_5[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 64, 64, 128) 0 conv2d_6[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 64, 64, 128) 512 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
leaky_re_lu_6 (LeakyReLU) (None, 64, 64, 128) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 64, 64, 128) 147456 leaky_re_lu_6[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 64, 64, 128) 512 conv2d_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_7 (LeakyReLU) (None, 64, 64, 128) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 64, 64, 128) 147456 leaky_re_lu_7[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 64, 64, 128) 0 conv2d_8[0][0]
max_pooling2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 64, 64, 128) 512 add_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_8 (LeakyReLU) (None, 64, 64, 128) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 64, 64, 128) 147456 leaky_re_lu_8[0][0]
__________________________________________________________________________________________________
batch_normalization_9 (BatchNor (None, 64, 64, 128) 512 conv2d_9[0][0]
__________________________________________________________________________________________________
leaky_re_lu_9 (LeakyReLU) (None, 64, 64, 128) 0 batch_normalization_9[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 64, 64, 128) 147456 leaky_re_lu_9[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 64, 64, 128) 0 conv2d_10[0][0]
add_2[0][0]
__________________________________________________________________________________________________
batch_normalization_10 (BatchNo (None, 64, 64, 128) 512 add_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_10 (LeakyReLU) (None, 64, 64, 128) 0 batch_normalization_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 64, 64, 256) 32768 leaky_re_lu_10[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 32, 32, 256) 0 conv2d_11[0][0]
__________________________________________________________________________________________________
batch_normalization_11 (BatchNo (None, 32, 32, 256) 1024 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
leaky_re_lu_11 (LeakyReLU) (None, 32, 32, 256) 0 batch_normalization_11[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 32, 32, 256) 589824 leaky_re_lu_11[0][0]
__________________________________________________________________________________________________
batch_normalization_12 (BatchNo (None, 32, 32, 256) 1024 conv2d_12[0][0]
__________________________________________________________________________________________________
leaky_re_lu_12 (LeakyReLU) (None, 32, 32, 256) 0 batch_normalization_12[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 32, 32, 256) 589824 leaky_re_lu_12[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 32, 32, 256) 0 conv2d_13[0][0]
max_pooling2d_2[0][0]
__________________________________________________________________________________________________
batch_normalization_13 (BatchNo (None, 32, 32, 256) 1024 add_4[0][0]
__________________________________________________________________________________________________
leaky_re_lu_13 (LeakyReLU) (None, 32, 32, 256) 0 batch_normalization_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 32, 32, 256) 589824 leaky_re_lu_13[0][0]
__________________________________________________________________________________________________
batch_normalization_14 (BatchNo (None, 32, 32, 256) 1024 conv2d_14[0][0]
__________________________________________________________________________________________________
leaky_re_lu_14 (LeakyReLU) (None, 32, 32, 256) 0 batch_normalization_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 32, 32, 256) 589824 leaky_re_lu_14[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 32, 32, 256) 0 conv2d_15[0][0]
add_4[0][0]
__________________________________________________________________________________________________
batch_normalization_15 (BatchNo (None, 32, 32, 256) 1024 add_5[0][0]
__________________________________________________________________________________________________
leaky_re_lu_15 (LeakyReLU) (None, 32, 32, 256) 0 batch_normalization_15[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 32, 32, 512) 131072 leaky_re_lu_15[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 16, 16, 512) 0 conv2d_16[0][0]
__________________________________________________________________________________________________
batch_normalization_16 (BatchNo (None, 16, 16, 512) 2048 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
leaky_re_lu_16 (LeakyReLU) (None, 16, 16, 512) 0 batch_normalization_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 16, 16, 512) 2359296 leaky_re_lu_16[0][0]
__________________________________________________________________________________________________
batch_normalization_17 (BatchNo (None, 16, 16, 512) 2048 conv2d_17[0][0]
__________________________________________________________________________________________________
leaky_re_lu_17 (LeakyReLU) (None, 16, 16, 512) 0 batch_normalization_17[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 16, 16, 512) 2359296 leaky_re_lu_17[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 16, 16, 512) 0 conv2d_18[0][0]
max_pooling2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_18 (BatchNo (None, 16, 16, 512) 2048 add_6[0][0]
__________________________________________________________________________________________________
leaky_re_lu_18 (LeakyReLU) (None, 16, 16, 512) 0 batch_normalization_18[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 16, 16, 512) 2359296 leaky_re_lu_18[0][0]
__________________________________________________________________________________________________
batch_normalization_19 (BatchNo (None, 16, 16, 512) 2048 conv2d_19[0][0]
__________________________________________________________________________________________________
leaky_re_lu_19 (LeakyReLU) (None, 16, 16, 512) 0 batch_normalization_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 16, 16, 512) 2359296 leaky_re_lu_19[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 16, 16, 512) 0 conv2d_20[0][0]
add_6[0][0]
__________________________________________________________________________________________________
batch_normalization_20 (BatchNo (None, 16, 16, 512) 2048 add_7[0][0]
__________________________________________________________________________________________________
leaky_re_lu_20 (LeakyReLU) (None, 16, 16, 512) 0 batch_normalization_20[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 16, 16, 1) 513 leaky_re_lu_20[0][0]
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D) (None, 256, 256, 1) 0 conv2d_21[0][0]
==================================================================================================
Total params: 12,727,969
Trainable params: 12,718,305
Non-trainable params: 9,664
__________________________________________________________________________________________________
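The summary above repeats one pattern at each resolution: BatchNormalization → LeakyReLU → Conv2D → BatchNormalization → LeakyReLU → Conv2D, with an Add layer joining the result back to the block's input (a pre-activation residual block). A minimal sketch of one such block; the 3×3 kernels, 'same' padding, and bias-free convolutions are inferred from the parameter counts (36,864 = 3·3·64·64), and the helper name `residual_block` is our own:

```python
import tensorflow as tf
from tensorflow import keras

def residual_block(x, filters):
    """Pre-activation residual block: BN -> LeakyReLU -> Conv, twice, plus skip."""
    shortcut = x
    y = keras.layers.BatchNormalization()(x)
    y = keras.layers.LeakyReLU()(y)
    y = keras.layers.Conv2D(filters, 3, padding='same', use_bias=False)(y)
    y = keras.layers.BatchNormalization()(y)
    y = keras.layers.LeakyReLU()(y)
    y = keras.layers.Conv2D(filters, 3, padding='same', use_bias=False)(y)
    # Skip connection: output keeps the same shape as the block input.
    return keras.layers.Add()([y, shortcut])

# Reproduces the first block of the summary: 128x128 feature maps, 64 channels.
inputs = keras.Input(shape=(128, 128, 64))
block = keras.Model(inputs, residual_block(inputs, 64))
```

Its parameter count (2 × 256 for the BatchNormalization layers plus 2 × 36,864 for the convolutions) matches the corresponding rows of the summary.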
# Resume training from epoch 7 (initial_epoch=6) for two more epochs.
# Note: fit_generator is deprecated in newer TF releases; model.fit accepts generators directly.
history = model.fit_generator(train_gen, validation_data=valid_gen, callbacks=[learning_rate, checkpoint],
                              epochs=8, workers=8, use_multiprocessing=True, initial_epoch=6)
Epoch 7/8
WARNING:tensorflow:multiprocessing can interact badly with TensorFlow, causing nondeterministic deadlocks. For high performance data pipelines tf.data is recommended.
Epoch 00007: accuracy improved from 0.94873 to 0.95261, saving model to 4000_checkpoint_1.hdf5
[per-batch progress output truncated]
78/78 [==============================] - 4159s 53s/step - loss: 0.3665 - accuracy: 0.9427 - mean_iou: 0.6042 - val_loss: 0.3991 - val_accuracy: 0.9474 - val_mean_iou: 0.5798 - lr: 8.6448e-04
Epoch 8/8
Epoch 00008: accuracy did not improve from 0.95261
[per-batch progress output truncated]
78/78 [==============================] - 4147s 53s/step - loss: 0.3597 - accuracy: 0.9438 - mean_iou: 0.6100 - val_loss: 0.3901 - val_accuracy: 0.9344 - val_mean_iou: 0.4983 - lr: 8.1871e-04
# serialize the model architecture to JSON and the weights to HDF5
model_json = model.to_json()
with open("model4000_8.json", "w") as json_file:
    json_file.write(model_json)
model.save_weights("model4000_8.h5")
plt.figure(figsize=(12,4))
plt.subplot(131)
plt.plot(history.epoch, history.history["loss"], label="Train loss")
plt.plot(history.epoch, history.history["val_loss"], label="Valid loss")
plt.legend()
plt.subplot(132)
plt.plot(history.epoch, history.history["accuracy"], label="Train accuracy")
plt.plot(history.epoch, history.history["val_accuracy"], label="Valid accuracy")
plt.legend()
plt.subplot(133)
plt.plot(history.epoch, history.history["mean_iou"], label="Train iou")
plt.plot(history.epoch, history.history["val_mean_iou"], label="Valid iou")
plt.legend()
plt.show()
The training above was run for an additional 2 epochs.
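The loop below turns a thresholded mask into bounding boxes via connected components and scores each box by the mean predicted probability inside it. The box/confidence arithmetic can be sketched in pure NumPy for a single hypothetical region (values invented for illustration):

```python
import numpy as np

# hypothetical 8x8 "predicted mask" with one bright rectangular region
pred = np.zeros((8, 8))
pred[2:5, 3:7] = 0.9  # rows 2-4, cols 3-6

# threshold, then recover the region extent
# (regionprops-style bbox: min_row, min_col, max_row, max_col)
comp = pred > 0.5
ys, xs = np.nonzero(comp)
y, x, y2, x2 = ys.min(), xs.min(), ys.max() + 1, xs.max() + 1
height, width = y2 - y, x2 - x

# mean predicted probability inside the box as a confidence proxy
conf = np.mean(pred[y:y+height, x:x+width])
print(x, y, width, height, round(float(conf), 2))  # -> 3 2 4 3 0.9
```

With several regions, `measure.label` separates them first and `regionprops` returns one such bbox per region, which is exactly what the loops below iterate over.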
for imgs, msks in valid_gen:
    # predict batch of images
    preds = model.predict(imgs)
    # create figure
    f, axarr = plt.subplots(4, 8, figsize=(20, 15))
    axarr = axarr.ravel()
    axidx = 0
    # loop through batch
    for img, msk, pred in zip(imgs, msks, preds):
        # plot image
        axarr[axidx].imshow(img[:, :, 0])
        # threshold true mask
        comp = msk[:, :, 0] > 0.5
        # apply connected components
        comp = measure.label(comp)
        # draw bounding boxes for the ground-truth regions (blue)
        for region in measure.regionprops(comp):
            # regionprops returns bbox as (min_row, min_col, max_row, max_col)
            y, x, y2, x2 = region.bbox
            height = y2 - y
            width = x2 - x
            axarr[axidx].add_patch(patches.Rectangle((x, y), width, height, linewidth=2, edgecolor='b', facecolor='none'))
        # threshold predicted mask
        comp = pred[:, :, 0] > 0.5
        # apply connected components
        comp = measure.label(comp)
        # draw bounding boxes for the predicted regions (red)
        for region in measure.regionprops(comp):
            y, x, y2, x2 = region.bbox
            height = y2 - y
            width = x2 - x
            # mean predicted probability inside the box as a confidence proxy
            conf = np.mean(pred[y:y+height, x:x+width])
            if conf > 0.3:
                axarr[axidx].add_patch(patches.Rectangle((x, y), width, height, linewidth=2, edgecolor='r', facecolor='none'))
        axidx += 1
    plt.show()
    # only plot one batch
    break
folder = 'stage_2_test_images'
test_filenames = os.listdir(folder)
# create test generator with predict flag set to True
test_gen = generator(folder, test_filenames, None, batch_size=2, image_size=256, shuffle=False, predict=True)
print('n test samples:', len(test_filenames))
f, axarr = plt.subplots(2, 5, figsize=(20, 10))
axarr = axarr.ravel()
axidx = 0
# create submission dictionary
submission_dict = {}
# loop through testset
for imgs, filenames in test_gen:
    # predict batch of images
    preds = model.predict(imgs)
    # loop through batch
    for img, pred, filename in zip(imgs, preds, filenames):
        # plot image
        axarr[axidx].imshow(img[:, :, 0])
        # resize predicted mask back to the original resolution for a real submission
        #pred = resize(pred, (1024, 1024), mode='reflect')
        # threshold predicted mask
        comp = pred[:, :, 0] > 0.5
        # apply connected components
        comp = measure.label(comp)
        # apply bounding boxes
        predictionString = ''
        for region in measure.regionprops(comp):
            # regionprops returns bbox as (min_row, min_col, max_row, max_col)
            y, x, y2, x2 = region.bbox
            height = y2 - y
            width = x2 - x
            # mean predicted probability inside the box as a confidence proxy
            conf = np.mean(pred[y:y+height, x:x+width])
            # add to predictionString
            if conf > 0.8:
                predictionString += str(conf) + ' ' + str(x) + ' ' + str(y) + ' ' + str(width) + ' ' + str(height) + ' '
                axarr[axidx].add_patch(patches.Rectangle((x, y), width, height, linewidth=2, edgecolor='r', facecolor='none'))
        # add filename (without extension) and predictionString to dictionary
        filename = filename.split('.')[0]
        submission_dict[filename] = predictionString
        axidx += 1
    if axidx >= 10:  # only plot the first 10 test images
        break
plt.show()
n test samples: 3000
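Each submission row is `patientId,PredictionString`, where the prediction string concatenates `confidence x y width height` for every retained box (and is empty when no pneumonia is detected). A minimal sketch of the formatting with hypothetical detections; note that since the model predicts at 256×256, real coordinates would first be scaled back to the original 1024×1024 frame, as in the commented-out `resize` above:

```python
# hypothetical detections: (confidence, x, y, width, height)
boxes = [(0.87, 264, 152, 213, 379), (0.91, 562, 152, 256, 453)]

predictionString = ''
for conf, x, y, width, height in boxes:
    predictionString += str(conf) + ' ' + str(x) + ' ' + str(y) + ' ' \
                        + str(width) + ' ' + str(height) + ' '

row = 'patientId' + ',' + predictionString.strip()
print(row)  # -> patientId,0.87 264 152 213 379 0.91 562 152 256 453
```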
DenseNet falls in the category of classic networks. DenseNet is quite similar to ResNet, with one fundamental difference: ResNet merges the output of the previous layer with the future layer by addition (+), whereas DenseNet concatenates the output of the previous layer with the future layer along the channel axis.
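The difference shows up directly in the tensor shapes: addition preserves the channel count, while concatenation grows it (which is why the channel dimension keeps increasing through each dense block in the summary below). A minimal NumPy sketch:

```python
import numpy as np

# two feature maps with the same spatial size and 32 channels each
prev = np.zeros((64, 64, 32))
new = np.zeros((64, 64, 32))

resnet_merge = prev + new                           # additive skip (ResNet)
densenet_merge = np.concatenate([prev, new], axis=-1)  # channel concat (DenseNet)

print(resnet_merge.shape)    # -> (64, 64, 32)
print(densenet_merge.shape)  # -> (64, 64, 64)
```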
from keras.layers import *
from keras.applications.densenet import DenseNet121
from keras.models import Model

# DenseNet121 backbone, trained from scratch on 1-channel 256x256 inputs
base_model = DenseNet121(include_top=False, weights=None, input_shape=(256, 256, 1))
base_model.trainable = True
print(base_model.input)
# 1-channel sigmoid head on the backbone output, upsampled back to 256x256
x = Dense(1, activation='sigmoid')(base_model.output)
x = UpSampling2D(32)(x)
print(x)
transfer_model = Model(base_model.input, x)
print(x)
transfer_model.compile(optimizer='adam',
                       loss=iou_bce_loss,
                       metrics=['accuracy'])
Tensor("input_4:0", shape=(None, 256, 256, 1), dtype=float32)
Tensor("up_sampling2d_3/ResizeNearestNeighbor:0", shape=(None, 256, 256, 1), dtype=float32)
Tensor("up_sampling2d_3/ResizeNearestNeighbor:0", shape=(None, 256, 256, 1), dtype=float32)
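The factor of 32 in `UpSampling2D(32)` is not arbitrary: DenseNet121 downsamples its input by a total stride of 2^5 (stem convolution, stem pooling, and three transition-block poolings), so a 256×256 input reaches the head at 8×8, and upsampling by 32 restores the 256×256 mask resolution seen in the printed tensor shapes above. The arithmetic:

```python
input_size = 256
total_stride = 2 ** 5          # DenseNet121's cumulative downsampling
feature_size = input_size // total_stride
print(feature_size)            # -> 8
print(feature_size * 32)       # -> 256
```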
history1 = transfer_model.fit_generator(train_gen, validation_data=valid_gen, callbacks=[learning_rate], epochs=5)
Epoch 1/5
75/75 [==============================] - 2233s 30s/step - loss: 0.5179 - accuracy: 0.8895 - val_loss: 0.9258 - val_accuracy: 0.4323
Epoch 2/5
75/75 [==============================] - 2156s 29s/step - loss: 0.4605 - accuracy: 0.9160 - val_loss: 0.7090 - val_accuracy: 0.7760
Epoch 3/5
75/75 [==============================] - 2200s 29s/step - loss: 0.4484 - accuracy: 0.9195 - val_loss: 0.7891 - val_accuracy: 0.6217
Epoch 4/5
75/75 [==============================] - 2193s 29s/step - loss: 0.4459 - accuracy: 0.9200 - val_loss: 0.6942 - val_accuracy: 0.7713
Epoch 5/5
75/75 [==============================] - 2266s 30s/step - loss: 0.4376 - accuracy: 0.9207 - val_loss: 0.4804 - val_accuracy: 0.9167
plt.figure(figsize=(12,4))
plt.subplot(131)
plt.plot(history1.epoch, history1.history["loss"], label="Train loss")
plt.plot(history1.epoch, history1.history["val_loss"], label="Valid loss")
plt.legend()
plt.subplot(132)
plt.plot(history1.epoch, history1.history["accuracy"], label="Train accuracy")
plt.plot(history1.epoch, history1.history["val_accuracy"], label="Valid accuracy")
plt.legend()
plt.show()
transfer_model.summary()
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_4 (InputLayer) (None, 256, 256, 1) 0
__________________________________________________________________________________________________
zero_padding2d_5 (ZeroPadding2D (None, 262, 262, 1) 0 input_4[0][0]
__________________________________________________________________________________________________
conv1/conv (Conv2D) (None, 128, 128, 64) 3136 zero_padding2d_5[0][0]
__________________________________________________________________________________________________
conv1/bn (BatchNormalization) (None, 128, 128, 64) 256 conv1/conv[0][0]
__________________________________________________________________________________________________
conv1/relu (Activation) (None, 128, 128, 64) 0 conv1/bn[0][0]
__________________________________________________________________________________________________
zero_padding2d_6 (ZeroPadding2D (None, 130, 130, 64) 0 conv1/relu[0][0]
__________________________________________________________________________________________________
pool1 (MaxPooling2D) (None, 64, 64, 64) 0 zero_padding2d_6[0][0]
__________________________________________________________________________________________________
conv2_block1_0_bn (BatchNormali (None, 64, 64, 64) 256 pool1[0][0]
__________________________________________________________________________________________________
conv2_block1_0_relu (Activation (None, 64, 64, 64) 0 conv2_block1_0_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_1_conv (Conv2D) (None, 64, 64, 128) 8192 conv2_block1_0_relu[0][0]
__________________________________________________________________________________________________
conv2_block1_1_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block1_1_relu (Activation (None, 64, 64, 128) 0 conv2_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block1_2_conv (Conv2D) (None, 64, 64, 32) 36864 conv2_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block1_concat (Concatenat (None, 64, 64, 96) 0 pool1[0][0]
conv2_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_0_bn (BatchNormali (None, 64, 64, 96) 384 conv2_block1_concat[0][0]
__________________________________________________________________________________________________
conv2_block2_0_relu (Activation (None, 64, 64, 96) 0 conv2_block2_0_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_1_conv (Conv2D) (None, 64, 64, 128) 12288 conv2_block2_0_relu[0][0]
__________________________________________________________________________________________________
conv2_block2_1_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block2_1_relu (Activation (None, 64, 64, 128) 0 conv2_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block2_2_conv (Conv2D) (None, 64, 64, 32) 36864 conv2_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block2_concat (Concatenat (None, 64, 64, 128) 0 conv2_block1_concat[0][0]
conv2_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_0_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block2_concat[0][0]
__________________________________________________________________________________________________
conv2_block3_0_relu (Activation (None, 64, 64, 128) 0 conv2_block3_0_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_1_conv (Conv2D) (None, 64, 64, 128) 16384 conv2_block3_0_relu[0][0]
__________________________________________________________________________________________________
conv2_block3_1_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block3_1_relu (Activation (None, 64, 64, 128) 0 conv2_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block3_2_conv (Conv2D) (None, 64, 64, 32) 36864 conv2_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block3_concat (Concatenat (None, 64, 64, 160) 0 conv2_block2_concat[0][0]
conv2_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block4_0_bn (BatchNormali (None, 64, 64, 160) 640 conv2_block3_concat[0][0]
__________________________________________________________________________________________________
conv2_block4_0_relu (Activation (None, 64, 64, 160) 0 conv2_block4_0_bn[0][0]
__________________________________________________________________________________________________
conv2_block4_1_conv (Conv2D) (None, 64, 64, 128) 20480 conv2_block4_0_relu[0][0]
__________________________________________________________________________________________________
conv2_block4_1_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block4_1_relu (Activation (None, 64, 64, 128) 0 conv2_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block4_2_conv (Conv2D) (None, 64, 64, 32) 36864 conv2_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block4_concat (Concatenat (None, 64, 64, 192) 0 conv2_block3_concat[0][0]
conv2_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block5_0_bn (BatchNormali (None, 64, 64, 192) 768 conv2_block4_concat[0][0]
__________________________________________________________________________________________________
conv2_block5_0_relu (Activation (None, 64, 64, 192) 0 conv2_block5_0_bn[0][0]
__________________________________________________________________________________________________
conv2_block5_1_conv (Conv2D) (None, 64, 64, 128) 24576 conv2_block5_0_relu[0][0]
__________________________________________________________________________________________________
conv2_block5_1_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block5_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block5_1_relu (Activation (None, 64, 64, 128) 0 conv2_block5_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block5_2_conv (Conv2D) (None, 64, 64, 32) 36864 conv2_block5_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block5_concat (Concatenat (None, 64, 64, 224) 0 conv2_block4_concat[0][0]
conv2_block5_2_conv[0][0]
__________________________________________________________________________________________________
conv2_block6_0_bn (BatchNormali (None, 64, 64, 224) 896 conv2_block5_concat[0][0]
__________________________________________________________________________________________________
conv2_block6_0_relu (Activation (None, 64, 64, 224) 0 conv2_block6_0_bn[0][0]
__________________________________________________________________________________________________
conv2_block6_1_conv (Conv2D) (None, 64, 64, 128) 28672 conv2_block6_0_relu[0][0]
__________________________________________________________________________________________________
conv2_block6_1_bn (BatchNormali (None, 64, 64, 128) 512 conv2_block6_1_conv[0][0]
__________________________________________________________________________________________________
conv2_block6_1_relu (Activation (None, 64, 64, 128) 0 conv2_block6_1_bn[0][0]
__________________________________________________________________________________________________
conv2_block6_2_conv (Conv2D) (None, 64, 64, 32) 36864 conv2_block6_1_relu[0][0]
__________________________________________________________________________________________________
conv2_block6_concat (Concatenat (None, 64, 64, 256) 0 conv2_block5_concat[0][0]
conv2_block6_2_conv[0][0]
__________________________________________________________________________________________________
pool2_bn (BatchNormalization) (None, 64, 64, 256) 1024 conv2_block6_concat[0][0]
__________________________________________________________________________________________________
pool2_relu (Activation) (None, 64, 64, 256) 0 pool2_bn[0][0]
__________________________________________________________________________________________________
pool2_conv (Conv2D) (None, 64, 64, 128) 32768 pool2_relu[0][0]
__________________________________________________________________________________________________
pool2_pool (AveragePooling2D) (None, 32, 32, 128) 0 pool2_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_0_bn (BatchNormali (None, 32, 32, 128) 512 pool2_pool[0][0]
__________________________________________________________________________________________________
conv3_block1_0_relu (Activation (None, 32, 32, 128) 0 conv3_block1_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_1_conv (Conv2D) (None, 32, 32, 128) 16384 conv3_block1_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block1_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block1_1_relu (Activation (None, 32, 32, 128) 0 conv3_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block1_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block1_concat (Concatenat (None, 32, 32, 160) 0 pool2_pool[0][0]
conv3_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_0_bn (BatchNormali (None, 32, 32, 160) 640 conv3_block1_concat[0][0]
__________________________________________________________________________________________________
conv3_block2_0_relu (Activation (None, 32, 32, 160) 0 conv3_block2_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_1_conv (Conv2D) (None, 32, 32, 128) 20480 conv3_block2_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block2_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block2_1_relu (Activation (None, 32, 32, 128) 0 conv3_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block2_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block2_concat (Concatenat (None, 32, 32, 192) 0 conv3_block1_concat[0][0]
conv3_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_0_bn (BatchNormali (None, 32, 32, 192) 768 conv3_block2_concat[0][0]
__________________________________________________________________________________________________
conv3_block3_0_relu (Activation (None, 32, 32, 192) 0 conv3_block3_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_1_conv (Conv2D) (None, 32, 32, 128) 24576 conv3_block3_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block3_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block3_1_relu (Activation (None, 32, 32, 128) 0 conv3_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block3_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block3_concat (Concatenat (None, 32, 32, 224) 0 conv3_block2_concat[0][0]
conv3_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_0_bn (BatchNormali (None, 32, 32, 224) 896 conv3_block3_concat[0][0]
__________________________________________________________________________________________________
conv3_block4_0_relu (Activation (None, 32, 32, 224) 0 conv3_block4_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_1_conv (Conv2D) (None, 32, 32, 128) 28672 conv3_block4_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block4_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block4_1_relu (Activation (None, 32, 32, 128) 0 conv3_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block4_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block4_concat (Concatenat (None, 32, 32, 256) 0 conv3_block3_concat[0][0]
conv3_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block5_0_bn (BatchNormali (None, 32, 32, 256) 1024 conv3_block4_concat[0][0]
__________________________________________________________________________________________________
conv3_block5_0_relu (Activation (None, 32, 32, 256) 0 conv3_block5_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block5_1_conv (Conv2D) (None, 32, 32, 128) 32768 conv3_block5_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block5_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block5_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block5_1_relu (Activation (None, 32, 32, 128) 0 conv3_block5_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block5_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block5_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block5_concat (Concatenat (None, 32, 32, 288) 0 conv3_block4_concat[0][0]
conv3_block5_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block6_0_bn (BatchNormali (None, 32, 32, 288) 1152 conv3_block5_concat[0][0]
__________________________________________________________________________________________________
conv3_block6_0_relu (Activation (None, 32, 32, 288) 0 conv3_block6_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block6_1_conv (Conv2D) (None, 32, 32, 128) 36864 conv3_block6_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block6_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block6_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block6_1_relu (Activation (None, 32, 32, 128) 0 conv3_block6_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block6_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block6_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block6_concat (Concatenat (None, 32, 32, 320) 0 conv3_block5_concat[0][0]
conv3_block6_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block7_0_bn (BatchNormali (None, 32, 32, 320) 1280 conv3_block6_concat[0][0]
__________________________________________________________________________________________________
conv3_block7_0_relu (Activation (None, 32, 32, 320) 0 conv3_block7_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block7_1_conv (Conv2D) (None, 32, 32, 128) 40960 conv3_block7_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block7_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block7_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block7_1_relu (Activation (None, 32, 32, 128) 0 conv3_block7_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block7_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block7_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block7_concat (Concatenat (None, 32, 32, 352) 0 conv3_block6_concat[0][0]
conv3_block7_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block8_0_bn (BatchNormali (None, 32, 32, 352) 1408 conv3_block7_concat[0][0]
__________________________________________________________________________________________________
conv3_block8_0_relu (Activation (None, 32, 32, 352) 0 conv3_block8_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block8_1_conv (Conv2D) (None, 32, 32, 128) 45056 conv3_block8_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block8_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block8_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block8_1_relu (Activation (None, 32, 32, 128) 0 conv3_block8_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block8_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block8_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block8_concat (Concatenat (None, 32, 32, 384) 0 conv3_block7_concat[0][0]
conv3_block8_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block9_0_bn (BatchNormali (None, 32, 32, 384) 1536 conv3_block8_concat[0][0]
__________________________________________________________________________________________________
conv3_block9_0_relu (Activation (None, 32, 32, 384) 0 conv3_block9_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block9_1_conv (Conv2D) (None, 32, 32, 128) 49152 conv3_block9_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block9_1_bn (BatchNormali (None, 32, 32, 128) 512 conv3_block9_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block9_1_relu (Activation (None, 32, 32, 128) 0 conv3_block9_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block9_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block9_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block9_concat (Concatenat (None, 32, 32, 416) 0 conv3_block8_concat[0][0]
conv3_block9_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block10_0_bn (BatchNormal (None, 32, 32, 416) 1664 conv3_block9_concat[0][0]
__________________________________________________________________________________________________
conv3_block10_0_relu (Activatio (None, 32, 32, 416) 0 conv3_block10_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block10_1_conv (Conv2D) (None, 32, 32, 128) 53248 conv3_block10_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block10_1_bn (BatchNormal (None, 32, 32, 128) 512 conv3_block10_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block10_1_relu (Activatio (None, 32, 32, 128) 0 conv3_block10_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block10_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block10_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block10_concat (Concatena (None, 32, 32, 448) 0 conv3_block9_concat[0][0]
conv3_block10_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block11_0_bn (BatchNormal (None, 32, 32, 448) 1792 conv3_block10_concat[0][0]
__________________________________________________________________________________________________
conv3_block11_0_relu (Activatio (None, 32, 32, 448) 0 conv3_block11_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block11_1_conv (Conv2D) (None, 32, 32, 128) 57344 conv3_block11_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block11_1_bn (BatchNormal (None, 32, 32, 128) 512 conv3_block11_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block11_1_relu (Activatio (None, 32, 32, 128) 0 conv3_block11_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block11_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block11_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block11_concat (Concatena (None, 32, 32, 480) 0 conv3_block10_concat[0][0]
conv3_block11_2_conv[0][0]
__________________________________________________________________________________________________
conv3_block12_0_bn (BatchNormal (None, 32, 32, 480) 1920 conv3_block11_concat[0][0]
__________________________________________________________________________________________________
conv3_block12_0_relu (Activatio (None, 32, 32, 480) 0 conv3_block12_0_bn[0][0]
__________________________________________________________________________________________________
conv3_block12_1_conv (Conv2D) (None, 32, 32, 128) 61440 conv3_block12_0_relu[0][0]
__________________________________________________________________________________________________
conv3_block12_1_bn (BatchNormal (None, 32, 32, 128) 512 conv3_block12_1_conv[0][0]
__________________________________________________________________________________________________
conv3_block12_1_relu (Activatio (None, 32, 32, 128) 0 conv3_block12_1_bn[0][0]
__________________________________________________________________________________________________
conv3_block12_2_conv (Conv2D) (None, 32, 32, 32) 36864 conv3_block12_1_relu[0][0]
__________________________________________________________________________________________________
conv3_block12_concat (Concatena (None, 32, 32, 512) 0 conv3_block11_concat[0][0]
conv3_block12_2_conv[0][0]
__________________________________________________________________________________________________
pool3_bn (BatchNormalization) (None, 32, 32, 512) 2048 conv3_block12_concat[0][0]
__________________________________________________________________________________________________
pool3_relu (Activation) (None, 32, 32, 512) 0 pool3_bn[0][0]
__________________________________________________________________________________________________
pool3_conv (Conv2D) (None, 32, 32, 256) 131072 pool3_relu[0][0]
__________________________________________________________________________________________________
pool3_pool (AveragePooling2D) (None, 16, 16, 256) 0 pool3_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_0_bn (BatchNormali (None, 16, 16, 256) 1024 pool3_pool[0][0]
__________________________________________________________________________________________________
conv4_block1_0_relu (Activation (None, 16, 16, 256) 0 conv4_block1_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_1_conv (Conv2D) (None, 16, 16, 128) 32768 conv4_block1_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block1_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block1_1_relu (Activation (None, 16, 16, 128) 0 conv4_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block1_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block1_concat (Concatenat (None, 16, 16, 288) 0 pool3_pool[0][0]
conv4_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_0_bn (BatchNormali (None, 16, 16, 288) 1152 conv4_block1_concat[0][0]
__________________________________________________________________________________________________
conv4_block2_0_relu (Activation (None, 16, 16, 288) 0 conv4_block2_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_1_conv (Conv2D) (None, 16, 16, 128) 36864 conv4_block2_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block2_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block2_1_relu (Activation (None, 16, 16, 128) 0 conv4_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block2_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block2_concat (Concatenat (None, 16, 16, 320) 0 conv4_block1_concat[0][0]
conv4_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_0_bn (BatchNormali (None, 16, 16, 320) 1280 conv4_block2_concat[0][0]
__________________________________________________________________________________________________
conv4_block3_0_relu (Activation (None, 16, 16, 320) 0 conv4_block3_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_1_conv (Conv2D) (None, 16, 16, 128) 40960 conv4_block3_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block3_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block3_1_relu (Activation (None, 16, 16, 128) 0 conv4_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block3_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block3_concat (Concatenat (None, 16, 16, 352) 0 conv4_block2_concat[0][0]
conv4_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_0_bn (BatchNormali (None, 16, 16, 352) 1408 conv4_block3_concat[0][0]
__________________________________________________________________________________________________
conv4_block4_0_relu (Activation (None, 16, 16, 352) 0 conv4_block4_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_1_conv (Conv2D) (None, 16, 16, 128) 45056 conv4_block4_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block4_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block4_1_relu (Activation (None, 16, 16, 128) 0 conv4_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block4_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block4_concat (Concatenat (None, 16, 16, 384) 0 conv4_block3_concat[0][0]
conv4_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_0_bn (BatchNormali (None, 16, 16, 384) 1536 conv4_block4_concat[0][0]
__________________________________________________________________________________________________
conv4_block5_0_relu (Activation (None, 16, 16, 384) 0 conv4_block5_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_1_conv (Conv2D) (None, 16, 16, 128) 49152 conv4_block5_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block5_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block5_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block5_1_relu (Activation (None, 16, 16, 128) 0 conv4_block5_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block5_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block5_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block5_concat (Concatenat (None, 16, 16, 416) 0 conv4_block4_concat[0][0]
conv4_block5_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_0_bn (BatchNormali (None, 16, 16, 416) 1664 conv4_block5_concat[0][0]
__________________________________________________________________________________________________
conv4_block6_0_relu (Activation (None, 16, 16, 416) 0 conv4_block6_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_1_conv (Conv2D) (None, 16, 16, 128) 53248 conv4_block6_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block6_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block6_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block6_1_relu (Activation (None, 16, 16, 128) 0 conv4_block6_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block6_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block6_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block6_concat (Concatenat (None, 16, 16, 448) 0 conv4_block5_concat[0][0]
conv4_block6_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block7_0_bn (BatchNormali (None, 16, 16, 448) 1792 conv4_block6_concat[0][0]
__________________________________________________________________________________________________
conv4_block7_0_relu (Activation (None, 16, 16, 448) 0 conv4_block7_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block7_1_conv (Conv2D) (None, 16, 16, 128) 57344 conv4_block7_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block7_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block7_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block7_1_relu (Activation (None, 16, 16, 128) 0 conv4_block7_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block7_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block7_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block7_concat (Concatenat (None, 16, 16, 480) 0 conv4_block6_concat[0][0]
conv4_block7_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block8_0_bn (BatchNormali (None, 16, 16, 480) 1920 conv4_block7_concat[0][0]
__________________________________________________________________________________________________
conv4_block8_0_relu (Activation (None, 16, 16, 480) 0 conv4_block8_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block8_1_conv (Conv2D) (None, 16, 16, 128) 61440 conv4_block8_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block8_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block8_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block8_1_relu (Activation (None, 16, 16, 128) 0 conv4_block8_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block8_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block8_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block8_concat (Concatenat (None, 16, 16, 512) 0 conv4_block7_concat[0][0]
conv4_block8_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block9_0_bn (BatchNormali (None, 16, 16, 512) 2048 conv4_block8_concat[0][0]
__________________________________________________________________________________________________
conv4_block9_0_relu (Activation (None, 16, 16, 512) 0 conv4_block9_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block9_1_conv (Conv2D) (None, 16, 16, 128) 65536 conv4_block9_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block9_1_bn (BatchNormali (None, 16, 16, 128) 512 conv4_block9_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block9_1_relu (Activation (None, 16, 16, 128) 0 conv4_block9_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block9_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block9_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block9_concat (Concatenat (None, 16, 16, 544) 0 conv4_block8_concat[0][0]
conv4_block9_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block10_0_bn (BatchNormal (None, 16, 16, 544) 2176 conv4_block9_concat[0][0]
__________________________________________________________________________________________________
conv4_block10_0_relu (Activatio (None, 16, 16, 544) 0 conv4_block10_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block10_1_conv (Conv2D) (None, 16, 16, 128) 69632 conv4_block10_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block10_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block10_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block10_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block10_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block10_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block10_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block10_concat (Concatena (None, 16, 16, 576) 0 conv4_block9_concat[0][0]
conv4_block10_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block11_0_bn (BatchNormal (None, 16, 16, 576) 2304 conv4_block10_concat[0][0]
__________________________________________________________________________________________________
conv4_block11_0_relu (Activatio (None, 16, 16, 576) 0 conv4_block11_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block11_1_conv (Conv2D) (None, 16, 16, 128) 73728 conv4_block11_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block11_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block11_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block11_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block11_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block11_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block11_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block11_concat (Concatena (None, 16, 16, 608) 0 conv4_block10_concat[0][0]
conv4_block11_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block12_0_bn (BatchNormal (None, 16, 16, 608) 2432 conv4_block11_concat[0][0]
__________________________________________________________________________________________________
conv4_block12_0_relu (Activatio (None, 16, 16, 608) 0 conv4_block12_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block12_1_conv (Conv2D) (None, 16, 16, 128) 77824 conv4_block12_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block12_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block12_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block12_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block12_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block12_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block12_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block12_concat (Concatena (None, 16, 16, 640) 0 conv4_block11_concat[0][0]
conv4_block12_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block13_0_bn (BatchNormal (None, 16, 16, 640) 2560 conv4_block12_concat[0][0]
__________________________________________________________________________________________________
conv4_block13_0_relu (Activatio (None, 16, 16, 640) 0 conv4_block13_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block13_1_conv (Conv2D) (None, 16, 16, 128) 81920 conv4_block13_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block13_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block13_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block13_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block13_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block13_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block13_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block13_concat (Concatena (None, 16, 16, 672) 0 conv4_block12_concat[0][0]
conv4_block13_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block14_0_bn (BatchNormal (None, 16, 16, 672) 2688 conv4_block13_concat[0][0]
__________________________________________________________________________________________________
conv4_block14_0_relu (Activatio (None, 16, 16, 672) 0 conv4_block14_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block14_1_conv (Conv2D) (None, 16, 16, 128) 86016 conv4_block14_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block14_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block14_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block14_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block14_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block14_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block14_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block14_concat (Concatena (None, 16, 16, 704) 0 conv4_block13_concat[0][0]
conv4_block14_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block15_0_bn (BatchNormal (None, 16, 16, 704) 2816 conv4_block14_concat[0][0]
__________________________________________________________________________________________________
conv4_block15_0_relu (Activatio (None, 16, 16, 704) 0 conv4_block15_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block15_1_conv (Conv2D) (None, 16, 16, 128) 90112 conv4_block15_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block15_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block15_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block15_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block15_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block15_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block15_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block15_concat (Concatena (None, 16, 16, 736) 0 conv4_block14_concat[0][0]
conv4_block15_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block16_0_bn (BatchNormal (None, 16, 16, 736) 2944 conv4_block15_concat[0][0]
__________________________________________________________________________________________________
conv4_block16_0_relu (Activatio (None, 16, 16, 736) 0 conv4_block16_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block16_1_conv (Conv2D) (None, 16, 16, 128) 94208 conv4_block16_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block16_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block16_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block16_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block16_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block16_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block16_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block16_concat (Concatena (None, 16, 16, 768) 0 conv4_block15_concat[0][0]
conv4_block16_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block17_0_bn (BatchNormal (None, 16, 16, 768) 3072 conv4_block16_concat[0][0]
__________________________________________________________________________________________________
conv4_block17_0_relu (Activatio (None, 16, 16, 768) 0 conv4_block17_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block17_1_conv (Conv2D) (None, 16, 16, 128) 98304 conv4_block17_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block17_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block17_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block17_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block17_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block17_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block17_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block17_concat (Concatena (None, 16, 16, 800) 0 conv4_block16_concat[0][0]
conv4_block17_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block18_0_bn (BatchNormal (None, 16, 16, 800) 3200 conv4_block17_concat[0][0]
__________________________________________________________________________________________________
conv4_block18_0_relu (Activatio (None, 16, 16, 800) 0 conv4_block18_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block18_1_conv (Conv2D) (None, 16, 16, 128) 102400 conv4_block18_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block18_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block18_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block18_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block18_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block18_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block18_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block18_concat (Concatena (None, 16, 16, 832) 0 conv4_block17_concat[0][0]
conv4_block18_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block19_0_bn (BatchNormal (None, 16, 16, 832) 3328 conv4_block18_concat[0][0]
__________________________________________________________________________________________________
conv4_block19_0_relu (Activatio (None, 16, 16, 832) 0 conv4_block19_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block19_1_conv (Conv2D) (None, 16, 16, 128) 106496 conv4_block19_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block19_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block19_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block19_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block19_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block19_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block19_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block19_concat (Concatena (None, 16, 16, 864) 0 conv4_block18_concat[0][0]
conv4_block19_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block20_0_bn (BatchNormal (None, 16, 16, 864) 3456 conv4_block19_concat[0][0]
__________________________________________________________________________________________________
conv4_block20_0_relu (Activatio (None, 16, 16, 864) 0 conv4_block20_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block20_1_conv (Conv2D) (None, 16, 16, 128) 110592 conv4_block20_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block20_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block20_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block20_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block20_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block20_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block20_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block20_concat (Concatena (None, 16, 16, 896) 0 conv4_block19_concat[0][0]
conv4_block20_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block21_0_bn (BatchNormal (None, 16, 16, 896) 3584 conv4_block20_concat[0][0]
__________________________________________________________________________________________________
conv4_block21_0_relu (Activatio (None, 16, 16, 896) 0 conv4_block21_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block21_1_conv (Conv2D) (None, 16, 16, 128) 114688 conv4_block21_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block21_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block21_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block21_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block21_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block21_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block21_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block21_concat (Concatena (None, 16, 16, 928) 0 conv4_block20_concat[0][0]
conv4_block21_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block22_0_bn (BatchNormal (None, 16, 16, 928) 3712 conv4_block21_concat[0][0]
__________________________________________________________________________________________________
conv4_block22_0_relu (Activatio (None, 16, 16, 928) 0 conv4_block22_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block22_1_conv (Conv2D) (None, 16, 16, 128) 118784 conv4_block22_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block22_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block22_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block22_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block22_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block22_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block22_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block22_concat (Concatena (None, 16, 16, 960) 0 conv4_block21_concat[0][0]
conv4_block22_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block23_0_bn (BatchNormal (None, 16, 16, 960) 3840 conv4_block22_concat[0][0]
__________________________________________________________________________________________________
conv4_block23_0_relu (Activatio (None, 16, 16, 960) 0 conv4_block23_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block23_1_conv (Conv2D) (None, 16, 16, 128) 122880 conv4_block23_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block23_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block23_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block23_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block23_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block23_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block23_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block23_concat (Concatena (None, 16, 16, 992) 0 conv4_block22_concat[0][0]
conv4_block23_2_conv[0][0]
__________________________________________________________________________________________________
conv4_block24_0_bn (BatchNormal (None, 16, 16, 992) 3968 conv4_block23_concat[0][0]
__________________________________________________________________________________________________
conv4_block24_0_relu (Activatio (None, 16, 16, 992) 0 conv4_block24_0_bn[0][0]
__________________________________________________________________________________________________
conv4_block24_1_conv (Conv2D) (None, 16, 16, 128) 126976 conv4_block24_0_relu[0][0]
__________________________________________________________________________________________________
conv4_block24_1_bn (BatchNormal (None, 16, 16, 128) 512 conv4_block24_1_conv[0][0]
__________________________________________________________________________________________________
conv4_block24_1_relu (Activatio (None, 16, 16, 128) 0 conv4_block24_1_bn[0][0]
__________________________________________________________________________________________________
conv4_block24_2_conv (Conv2D) (None, 16, 16, 32) 36864 conv4_block24_1_relu[0][0]
__________________________________________________________________________________________________
conv4_block24_concat (Concatena (None, 16, 16, 1024) 0 conv4_block23_concat[0][0]
conv4_block24_2_conv[0][0]
__________________________________________________________________________________________________
pool4_bn (BatchNormalization) (None, 16, 16, 1024) 4096 conv4_block24_concat[0][0]
__________________________________________________________________________________________________
pool4_relu (Activation) (None, 16, 16, 1024) 0 pool4_bn[0][0]
__________________________________________________________________________________________________
pool4_conv (Conv2D) (None, 16, 16, 512) 524288 pool4_relu[0][0]
__________________________________________________________________________________________________
pool4_pool (AveragePooling2D) (None, 8, 8, 512) 0 pool4_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_0_bn (BatchNormali (None, 8, 8, 512) 2048 pool4_pool[0][0]
__________________________________________________________________________________________________
conv5_block1_0_relu (Activation (None, 8, 8, 512) 0 conv5_block1_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_1_conv (Conv2D) (None, 8, 8, 128) 65536 conv5_block1_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block1_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block1_1_relu (Activation (None, 8, 8, 128) 0 conv5_block1_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block1_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block1_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block1_concat (Concatenat (None, 8, 8, 544) 0 pool4_pool[0][0]
conv5_block1_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_0_bn (BatchNormali (None, 8, 8, 544) 2176 conv5_block1_concat[0][0]
__________________________________________________________________________________________________
conv5_block2_0_relu (Activation (None, 8, 8, 544) 0 conv5_block2_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_1_conv (Conv2D) (None, 8, 8, 128) 69632 conv5_block2_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block2_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block2_1_relu (Activation (None, 8, 8, 128) 0 conv5_block2_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block2_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block2_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block2_concat (Concatenat (None, 8, 8, 576) 0 conv5_block1_concat[0][0]
conv5_block2_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_0_bn (BatchNormali (None, 8, 8, 576) 2304 conv5_block2_concat[0][0]
__________________________________________________________________________________________________
conv5_block3_0_relu (Activation (None, 8, 8, 576) 0 conv5_block3_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_1_conv (Conv2D) (None, 8, 8, 128) 73728 conv5_block3_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block3_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block3_1_relu (Activation (None, 8, 8, 128) 0 conv5_block3_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block3_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block3_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block3_concat (Concatenat (None, 8, 8, 608) 0 conv5_block2_concat[0][0]
conv5_block3_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block4_0_bn (BatchNormali (None, 8, 8, 608) 2432 conv5_block3_concat[0][0]
__________________________________________________________________________________________________
conv5_block4_0_relu (Activation (None, 8, 8, 608) 0 conv5_block4_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block4_1_conv (Conv2D) (None, 8, 8, 128) 77824 conv5_block4_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block4_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block4_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block4_1_relu (Activation (None, 8, 8, 128) 0 conv5_block4_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block4_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block4_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block4_concat (Concatenat (None, 8, 8, 640) 0 conv5_block3_concat[0][0]
conv5_block4_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block5_0_bn (BatchNormali (None, 8, 8, 640) 2560 conv5_block4_concat[0][0]
__________________________________________________________________________________________________
conv5_block5_0_relu (Activation (None, 8, 8, 640) 0 conv5_block5_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block5_1_conv (Conv2D) (None, 8, 8, 128) 81920 conv5_block5_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block5_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block5_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block5_1_relu (Activation (None, 8, 8, 128) 0 conv5_block5_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block5_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block5_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block5_concat (Concatenat (None, 8, 8, 672) 0 conv5_block4_concat[0][0]
conv5_block5_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block6_0_bn (BatchNormali (None, 8, 8, 672) 2688 conv5_block5_concat[0][0]
__________________________________________________________________________________________________
conv5_block6_0_relu (Activation (None, 8, 8, 672) 0 conv5_block6_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block6_1_conv (Conv2D) (None, 8, 8, 128) 86016 conv5_block6_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block6_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block6_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block6_1_relu (Activation (None, 8, 8, 128) 0 conv5_block6_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block6_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block6_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block6_concat (Concatenat (None, 8, 8, 704) 0 conv5_block5_concat[0][0]
conv5_block6_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block7_0_bn (BatchNormali (None, 8, 8, 704) 2816 conv5_block6_concat[0][0]
__________________________________________________________________________________________________
conv5_block7_0_relu (Activation (None, 8, 8, 704) 0 conv5_block7_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block7_1_conv (Conv2D) (None, 8, 8, 128) 90112 conv5_block7_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block7_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block7_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block7_1_relu (Activation (None, 8, 8, 128) 0 conv5_block7_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block7_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block7_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block7_concat (Concatenat (None, 8, 8, 736) 0 conv5_block6_concat[0][0]
conv5_block7_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block8_0_bn (BatchNormali (None, 8, 8, 736) 2944 conv5_block7_concat[0][0]
__________________________________________________________________________________________________
conv5_block8_0_relu (Activation (None, 8, 8, 736) 0 conv5_block8_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block8_1_conv (Conv2D) (None, 8, 8, 128) 94208 conv5_block8_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block8_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block8_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block8_1_relu (Activation (None, 8, 8, 128) 0 conv5_block8_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block8_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block8_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block8_concat (Concatenat (None, 8, 8, 768) 0 conv5_block7_concat[0][0]
conv5_block8_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block9_0_bn (BatchNormali (None, 8, 8, 768) 3072 conv5_block8_concat[0][0]
__________________________________________________________________________________________________
conv5_block9_0_relu (Activation (None, 8, 8, 768) 0 conv5_block9_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block9_1_conv (Conv2D) (None, 8, 8, 128) 98304 conv5_block9_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block9_1_bn (BatchNormali (None, 8, 8, 128) 512 conv5_block9_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block9_1_relu (Activation (None, 8, 8, 128) 0 conv5_block9_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block9_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block9_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block9_concat (Concatenat (None, 8, 8, 800) 0 conv5_block8_concat[0][0]
conv5_block9_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block10_0_bn (BatchNormal (None, 8, 8, 800) 3200 conv5_block9_concat[0][0]
__________________________________________________________________________________________________
conv5_block10_0_relu (Activatio (None, 8, 8, 800) 0 conv5_block10_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block10_1_conv (Conv2D) (None, 8, 8, 128) 102400 conv5_block10_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block10_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block10_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block10_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block10_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block10_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block10_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block10_concat (Concatena (None, 8, 8, 832) 0 conv5_block9_concat[0][0]
conv5_block10_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block11_0_bn (BatchNormal (None, 8, 8, 832) 3328 conv5_block10_concat[0][0]
__________________________________________________________________________________________________
conv5_block11_0_relu (Activatio (None, 8, 8, 832) 0 conv5_block11_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block11_1_conv (Conv2D) (None, 8, 8, 128) 106496 conv5_block11_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block11_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block11_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block11_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block11_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block11_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block11_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block11_concat (Concatena (None, 8, 8, 864) 0 conv5_block10_concat[0][0]
conv5_block11_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block12_0_bn (BatchNormal (None, 8, 8, 864) 3456 conv5_block11_concat[0][0]
__________________________________________________________________________________________________
conv5_block12_0_relu (Activatio (None, 8, 8, 864) 0 conv5_block12_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block12_1_conv (Conv2D) (None, 8, 8, 128) 110592 conv5_block12_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block12_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block12_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block12_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block12_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block12_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block12_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block12_concat (Concatena (None, 8, 8, 896) 0 conv5_block11_concat[0][0]
conv5_block12_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block13_0_bn (BatchNormal (None, 8, 8, 896) 3584 conv5_block12_concat[0][0]
__________________________________________________________________________________________________
conv5_block13_0_relu (Activatio (None, 8, 8, 896) 0 conv5_block13_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block13_1_conv (Conv2D) (None, 8, 8, 128) 114688 conv5_block13_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block13_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block13_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block13_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block13_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block13_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block13_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block13_concat (Concatena (None, 8, 8, 928) 0 conv5_block12_concat[0][0]
conv5_block13_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block14_0_bn (BatchNormal (None, 8, 8, 928) 3712 conv5_block13_concat[0][0]
__________________________________________________________________________________________________
conv5_block14_0_relu (Activatio (None, 8, 8, 928) 0 conv5_block14_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block14_1_conv (Conv2D) (None, 8, 8, 128) 118784 conv5_block14_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block14_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block14_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block14_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block14_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block14_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block14_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block14_concat (Concatena (None, 8, 8, 960) 0 conv5_block13_concat[0][0]
conv5_block14_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block15_0_bn (BatchNormal (None, 8, 8, 960) 3840 conv5_block14_concat[0][0]
__________________________________________________________________________________________________
conv5_block15_0_relu (Activatio (None, 8, 8, 960) 0 conv5_block15_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block15_1_conv (Conv2D) (None, 8, 8, 128) 122880 conv5_block15_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block15_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block15_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block15_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block15_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block15_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block15_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block15_concat (Concatena (None, 8, 8, 992) 0 conv5_block14_concat[0][0]
conv5_block15_2_conv[0][0]
__________________________________________________________________________________________________
conv5_block16_0_bn (BatchNormal (None, 8, 8, 992) 3968 conv5_block15_concat[0][0]
__________________________________________________________________________________________________
conv5_block16_0_relu (Activatio (None, 8, 8, 992) 0 conv5_block16_0_bn[0][0]
__________________________________________________________________________________________________
conv5_block16_1_conv (Conv2D) (None, 8, 8, 128) 126976 conv5_block16_0_relu[0][0]
__________________________________________________________________________________________________
conv5_block16_1_bn (BatchNormal (None, 8, 8, 128) 512 conv5_block16_1_conv[0][0]
__________________________________________________________________________________________________
conv5_block16_1_relu (Activatio (None, 8, 8, 128) 0 conv5_block16_1_bn[0][0]
__________________________________________________________________________________________________
conv5_block16_2_conv (Conv2D) (None, 8, 8, 32) 36864 conv5_block16_1_relu[0][0]
__________________________________________________________________________________________________
conv5_block16_concat (Concatena (None, 8, 8, 1024) 0 conv5_block15_concat[0][0]
conv5_block16_2_conv[0][0]
__________________________________________________________________________________________________
bn (BatchNormalization) (None, 8, 8, 1024) 4096 conv5_block16_concat[0][0]
__________________________________________________________________________________________________
relu (Activation) (None, 8, 8, 1024) 0 bn[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 8, 8, 1) 1025 relu[0][0]
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D) (None, 256, 256, 1) 0 dense_3[0][0]
==================================================================================================
Total params: 7,032,257
Trainable params: 6,948,609
Non-trainable params: 83,648
__________________________________________________________________________________________________
The UNET architecture contains two paths. First path is the contraction path (also called the encoder) which is used to capture the context in the image. The encoder is just a traditional stack of convolutional and max pooling layers.
from tensorflow.keras import Model
from tensorflow.keras.applications.mobilenet import MobileNet, preprocess_input
from tensorflow.keras.callbacks import ModelCheckpoint, EarlyStopping, ReduceLROnPlateau
from tensorflow.keras.layers import Concatenate, Conv2D, UpSampling2D, Reshape
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ZeroPadding2D, concatenate,Convolution2D, MaxPooling2D, Dropout, Flatten, Activation, BatchNormalization,MaxPool2D,Dense,Input,Concatenate
from tensorflow.keras.utils import Sequence
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import binary_crossentropy
import cv2
from skimage.transform import resize
inputs = Input((128,128,1))
#zeropadding = tf.keras.layers.ZeroPadding2D(padding=1)(inputs)
bn1 = BatchNormalization(momentum=0.9)(inputs)
conv1 = Conv2D(32, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(bn1)
conv1 = Conv2D(32, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2))(conv1)
bn2 = BatchNormalization(momentum=0.9)(pool1)
conv2 = Conv2D(64, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(bn2)
drop12 = Dropout(0.2)(conv2)
conv2 = Conv2D(64, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(drop12)
pool2 = MaxPooling2D(pool_size=(2, 2))(conv2)
bn3 = BatchNormalization(momentum=0.9)(pool2)
conv3 = Conv2D(128, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(bn3)
drop13 = Dropout(0.2)(conv3)
conv3 = Conv2D(128, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(drop13)
pool3 = MaxPooling2D(pool_size=(2, 2))(conv3)
bn4 = BatchNormalization(momentum=0.9)(pool3)
conv4 = Conv2D(256, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(bn4)
conv4 = Conv2D(256, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv4)
drop4 = Dropout(0.5)(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2))(drop4)
bn5 = BatchNormalization(momentum=0.9)(pool4)
conv5 = Conv2D(512, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(bn5)
conv5 = Conv2D(512, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv5)
drop5 = Dropout(0.5)(conv5)
bn6 = BatchNormalization(momentum=0.9)(drop5)
up6 = Conv2D(256, (2,2), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(bn6))
merge6 = concatenate([drop4,up6], axis = 3)
conv6 = Conv2D(256, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge6)
conv6 = Conv2D(256, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv6)
bn7 = BatchNormalization(momentum=0.0)(conv6)
up7 = Conv2D(128, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(bn7))
merge7 = concatenate([conv3,up7], axis = 3)
conv7 = Conv2D(128, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge7)
conv7 = Conv2D(128, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv7)
bn9 = BatchNormalization(momentum=0.0)(conv7)
up8 = Conv2D(64, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(bn9))
merge8 = concatenate([conv2,up8], axis = 3)
conv8 = Conv2D(64, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge8)
conv8 = Conv2D(64, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv8)
bn8 = BatchNormalization(momentum=0.0)(conv8)
up9 = Conv2D(32, 2, activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(UpSampling2D(size = (2,2))(bn8))
merge9 = concatenate([conv1,up9], axis = 3)
conv9 = Conv2D(32, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(merge9)
conv9 = Conv2D(32, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv9 = Conv2D(2, (3,3), activation = 'relu', padding = 'same', kernel_initializer = 'he_normal')(conv9)
conv10 = Conv2D(1,(1,1), activation = 'sigmoid')(conv9)
model_unet = Model(inputs=[inputs],outputs=[conv10])
optimizer1 = Adam(lr=0.0001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)
model_unet.compile(optimizer = Adam(lr = 0.0001), loss = iou_bce_loss, metrics = ['accuracy',mean_iou])
model_unet.summary()
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 128, 128, 1) 0
__________________________________________________________________________________________________
batch_normalization (BatchNorma (None, 128, 128, 1) 4 input_1[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 128, 128, 32) 320 batch_normalization[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 32) 9248 conv2d[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 64, 64, 32) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
batch_normalization_1 (BatchNor (None, 64, 64, 32) 128 max_pooling2d[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 64) 18496 batch_normalization_1[0][0]
__________________________________________________________________________________________________
dropout (Dropout) (None, 64, 64, 64) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 64) 36928 dropout[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 32, 32, 64) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
batch_normalization_2 (BatchNor (None, 32, 32, 64) 256 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 32, 32, 128) 73856 batch_normalization_2[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout) (None, 32, 32, 128) 0 conv2d_4[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 32, 32, 128) 147584 dropout_1[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 16, 16, 128) 0 conv2d_5[0][0]
__________________________________________________________________________________________________
batch_normalization_3 (BatchNor (None, 16, 16, 128) 512 max_pooling2d_2[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 16, 16, 256) 295168 batch_normalization_3[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 16, 16, 256) 590080 conv2d_6[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout) (None, 16, 16, 256) 0 conv2d_7[0][0]
__________________________________________________________________________________________________
max_pooling2d_3 (MaxPooling2D) (None, 8, 8, 256) 0 dropout_2[0][0]
__________________________________________________________________________________________________
batch_normalization_4 (BatchNor (None, 8, 8, 256) 1024 max_pooling2d_3[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 8, 8, 512) 1180160 batch_normalization_4[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 8, 8, 512) 2359808 conv2d_8[0][0]
__________________________________________________________________________________________________
dropout_3 (Dropout) (None, 8, 8, 512) 0 conv2d_9[0][0]
__________________________________________________________________________________________________
batch_normalization_5 (BatchNor (None, 8, 8, 512) 2048 dropout_3[0][0]
__________________________________________________________________________________________________
up_sampling2d (UpSampling2D) (None, 16, 16, 512) 0 batch_normalization_5[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 16, 16, 256) 524544 up_sampling2d[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 16, 16, 512) 0 dropout_2[0][0]
conv2d_10[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 16, 16, 256) 1179904 concatenate[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 16, 16, 256) 590080 conv2d_11[0][0]
__________________________________________________________________________________________________
batch_normalization_6 (BatchNor (None, 16, 16, 256) 1024 conv2d_12[0][0]
__________________________________________________________________________________________________
up_sampling2d_1 (UpSampling2D) (None, 32, 32, 256) 0 batch_normalization_6[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 32, 32, 128) 295040 up_sampling2d_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 32, 32, 256) 0 conv2d_5[0][0]
conv2d_13[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 32, 32, 128) 295040 concatenate_1[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 32, 32, 128) 147584 conv2d_14[0][0]
__________________________________________________________________________________________________
batch_normalization_7 (BatchNor (None, 32, 32, 128) 512 conv2d_15[0][0]
__________________________________________________________________________________________________
up_sampling2d_2 (UpSampling2D) (None, 64, 64, 128) 0 batch_normalization_7[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 64, 64, 64) 32832 up_sampling2d_2[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 64, 64, 128) 0 conv2d_3[0][0]
conv2d_16[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 64, 64, 64) 73792 concatenate_2[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 64, 64, 64) 36928 conv2d_17[0][0]
__________________________________________________________________________________________________
batch_normalization_8 (BatchNor (None, 64, 64, 64) 256 conv2d_18[0][0]
__________________________________________________________________________________________________
up_sampling2d_3 (UpSampling2D) (None, 128, 128, 64) 0 batch_normalization_8[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 128, 128, 32) 8224 up_sampling2d_3[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 128, 128, 64) 0 conv2d_1[0][0]
conv2d_19[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 128, 128, 32) 18464 concatenate_3[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 128, 128, 32) 9248 conv2d_20[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 128, 128, 2) 578 conv2d_21[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 128, 128, 1) 3 conv2d_22[0][0]
==================================================================================================
Total params: 7,929,673
Trainable params: 7,926,791
Non-trainable params: 2,882
__________________________________________________________________________________________________
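Each Conv2D row in the summary above follows the standard parameter formula kh·kw·c_in·filters + filters (weights plus one bias per filter). A quick pure-Python spot-check of three rows (the kernel sizes 3×3, 2×2, and 1×1 are inferred from the counts, since the model-building code is not part of this excerpt):

```python
def conv2d_params(kh, kw, c_in, filters):
    # trainable weights (kh * kw * c_in per filter) plus one bias per filter
    return kh * kw * c_in * filters + filters

print(conv2d_params(3, 3, 64, 64))    # conv2d_3:  36928
print(conv2d_params(2, 2, 512, 256))  # conv2d_10: 524544 (2x2 up-convolution)
print(conv2d_params(1, 1, 2, 1))      # conv2d_23: 3 (1x1 output layer)
```

The BatchNormalization rows follow 4·channels (gamma, beta, moving mean, moving variance), e.g. 4·64 = 256 for batch_normalization_2; the moving statistics are what the summary reports as non-trainable parameters.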
folder = '/content/drive/My Drive/stage_2_train_images'
train_gen = generator(folder, train_filenames, pneumonia_locations, batch_size=42, image_size=128, shuffle=True, augment=True, predict=False)
valid_gen = generator(folder, valid_filenames, pneumonia_locations, batch_size=42, image_size=128, shuffle=False, predict=False)
model_unet.load_weights('/content/drive/My Drive/model_unet_weights6.h5')
history3 = model_unet.fit_generator(train_gen, validation_data=valid_gen, callbacks=[learning_rate], epochs=2)
model_unet.save_weights('/content/drive/My Drive/model_unet_weights8.h5')
Epoch 1/2
57/57 [==============================] - 1916s 34s/step - loss: 0.2912 - accuracy: 0.9595 - mean_iou: 0.7167 - val_loss: 0.2922 - val_accuracy: 0.9559 - val_mean_iou: 0.6721 - lr: 0.0010
Epoch 2/2
57/57 [==============================] - 1832s 32s/step - loss: 0.2849 - accuracy: 0.9608 - mean_iou: 0.7123 - val_loss: 0.2899 - val_accuracy: 0.9571 - val_mean_iou: 0.6862 - lr: 9.9606e-04
model_unet.load_weights('/content/drive/My Drive/model_unet_weights6.h5')
history3 = model_unet.fit_generator(train_gen, validation_data=valid_gen, callbacks=[learning_rate], epochs=4)
model_unet.save_weights('/content/drive/My Drive/model_unet_weights8.h5')
Epoch 1/4
57/57 [==============================] - 1953s 34s/step - loss: 0.2996 - accuracy: 0.9586 - mean_iou: 0.7001 - val_loss: 0.2858 - val_accuracy: 0.9559 - val_mean_iou: 0.7037 - lr: 0.0010
Epoch 2/4
57/57 [==============================] - 1845s 32s/step - loss: 0.2758 - accuracy: 0.9628 - mean_iou: 0.7289 - val_loss: 0.2849 - val_accuracy: 0.9538 - val_mean_iou: 0.7080 - lr: 9.9606e-04
Epoch 3/4
43/57 [=====================>........] - ETA: 7:05 - loss: 0.2744 - accuracy: 0.9637 - mean_iou: 0.7309
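The `learning_rate` callback passed to `fit_generator` is not defined in this excerpt. One assumption consistent with the logged values is a cosine-annealing schedule: 0.001·(cos(π·1/25)+1)/2 ≈ 9.9606e-04, exactly the lr reported after the first epoch. A minimal sketch:

```python
import math

def cosine_annealing(epoch, lr_max=0.001, total_epochs=25):
    # decays smoothly from lr_max at epoch 0 towards 0 at total_epochs
    return lr_max * (math.cos(math.pi * epoch / total_epochs) + 1.0) / 2.0

# assumed wiring into training:
# learning_rate = keras.callbacks.LearningRateScheduler(cosine_annealing)
print(cosine_annealing(0))  # 0.001
print(cosine_annealing(1))  # ~9.9606e-04, matching the training log
```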
plt.figure(figsize=(12,4))
plt.subplot(131)
plt.plot(history3.epoch, history3.history["loss"], label="Train loss")
plt.plot(history3.epoch, history3.history["val_loss"], label="Valid loss")
plt.legend()
plt.subplot(132)
plt.plot(history3.epoch, history3.history["accuracy"], label="Train accuracy")
plt.plot(history3.epoch, history3.history["val_accuracy"], label="Valid accuracy")
plt.legend()
plt.subplot(133)
plt.plot(history3.epoch, history3.history["mean_iou"], label="Train iou")
plt.plot(history3.epoch, history3.history["val_mean_iou"], label="Valid iou")
plt.legend()
plt.show()
for imgs, msks in valid_gen:
    # predict batch of images
    preds = model_unet.predict(imgs)
    # create figure
    f, axarr = plt.subplots(4, 8, figsize=(20, 15))
    axarr = axarr.ravel()
    axidx = 0
    # loop through batch
    for img, msk, pred in zip(imgs, msks, preds):
        # plot image
        axarr[axidx].imshow(img[:, :, 0])
        # threshold true mask
        comp = msk[:, :, 0] > 0.5
        # apply connected components
        comp = measure.label(comp)
        # draw ground-truth bounding boxes in blue
        for region in measure.regionprops(comp):
            # retrieve x, y, height and width
            y, x, y2, x2 = region.bbox
            height = y2 - y
            width = x2 - x
            axarr[axidx].add_patch(patches.Rectangle((x, y), width, height, linewidth=2, edgecolor='b', facecolor='none'))
        # threshold predicted mask
        comp = pred[:, :, 0] > 0.5
        # apply connected components
        comp = measure.label(comp)
        # draw predicted bounding boxes in red
        for region in measure.regionprops(comp):
            # retrieve x, y, height and width
            y, x, y2, x2 = region.bbox
            height = y2 - y
            width = x2 - x
            conf = np.mean(pred[y:y+height, x:x+width])
            if conf > 0.5:
                axarr[axidx].add_patch(patches.Rectangle((x, y), width, height, linewidth=2, edgecolor='r', facecolor='none'))
        axidx += 1
        if axidx == 32:
            break
    plt.show()
    # only plot one batch
    break
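The mask-to-boxes step above (threshold → `measure.label` → `regionprops` → `bbox`) can be illustrated without skimage. This pure-NumPy sketch labels 4-connected components and emits the same (x, y, width, height) boxes the loop derives from `region.bbox`:

```python
import numpy as np

def mask_to_boxes(mask):
    """Find 4-connected components in a boolean mask and return
    (x, y, width, height) boxes, mirroring measure.label + regionprops."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    boxes = []
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                # flood-fill one component with an explicit stack
                stack = [(sy, sx)]
                seen[sy, sx] = True
                ys, xs = [], []
                while stack:
                    y, x = stack.pop()
                    ys.append(y)
                    xs.append(x)
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                boxes.append((min(xs), min(ys),
                              max(xs) - min(xs) + 1, max(ys) - min(ys) + 1))
    return boxes

mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:4] = True   # one 3-wide, 2-tall blob
mask[5:7, 5:7] = True   # one 2x2 blob
print(mask_to_boxes(mask))  # [(1, 1, 3, 2), (5, 5, 2, 2)]
```

Note that `region.bbox` returns (min_row, min_col, max_row + 1, max_col + 1), which is why the loop computes height and width as `y2 - y` and `x2 - x`.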
folder = 'stage_2_test_images'
test_filenames = os.listdir(folder)
# create test generator with predict flag set to True
test_gen = generator(folder, test_filenames, None, batch_size=2, image_size=128, shuffle=False, predict=True)
print('n test samples:', len(test_filenames))
f, axarr = plt.subplots(3, 5, figsize=(20, 10))
axarr = axarr.ravel()
axidx = 0
# create submission dictionary
submission_dict = {}
# loop through testset
for imgs, filenames in test_gen:
    # predict batch of images
    preds = model_unet.predict(imgs)
    # loop through batch
    for img, pred, filename in zip(imgs, preds, filenames):
        # resize predicted mask
        #pred = resize(pred, (1024, 1024), mode='reflect')
        axarr[axidx].imshow(img[:, :, 0])
        # threshold predicted mask
        comp = pred[:, :, 0] > 0.7
        # apply connected components
        comp = measure.label(comp)
        # apply bounding boxes
        predictionString = ''
        for region in measure.regionprops(comp):
            # retrieve x, y, height and width
            y, x, y2, x2 = region.bbox
            height = y2 - y
            width = x2 - x
            # proxy for confidence score
            conf = np.mean(pred[y:y+height, x:x+width])
            # add to predictionString
            if conf > 0.76:
                predictionString += str(conf) + ' ' + str(x) + ' ' + str(y) + ' ' + str(width) + ' ' + str(height) + ' '
                axarr[axidx].add_patch(patches.Rectangle((x, y), width, height, linewidth=2, edgecolor='r', facecolor='none'))
        # add filename and predictionString to dictionary
        filename = filename.split('.')[0]
        submission_dict[filename] = predictionString
        axidx += 1
        if axidx >= 15:  # len(test_filenames)
            break
    # exit the outer loop too, otherwise axidx would run past the 3x5 grid
    if axidx >= 15:
        break
plt.show()
n test samples: 3000
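The `submission_dict` built above still has to be written out in the competition's two-column format (patientId, PredictionString). A minimal sketch using the standard `csv` module (the output filename and the example entries are hypothetical):

```python
import csv

def write_submission(submission_dict, path):
    # one row per test image: patientId, then the space-separated
    # "confidence x y width height" string (empty when no box was predicted)
    with open(path, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['patientId', 'PredictionString'])
        for patient_id, prediction_string in submission_dict.items():
            writer.writerow([patient_id, prediction_string.strip()])

# hypothetical example entries
write_submission({'0004cfab-14fd-4e49-80ba-63a80b6bddd6': '0.82 12 34 56 78 ',
                  'abcd1234': ''}, 'submission.csv')
```

Note that the predictions above are made at 128×128 while the resize back to the original 1024×1024 is left commented out, so box coordinates would need rescaling by 1024/128 = 8 before a real submission.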
In this project, we implemented three different models (ResNet, U-Net, and DenseNet) to identify pneumonia in chest radiographs. All three models were evaluated with different hyperparameters and different numbers of training images: we began with fewer than 1,000 images, increased the training set in stages, and recorded training and validation accuracy at each stage. For model evaluation we used the Jaccard index and mean IoU. Initially, we achieved accuracies of 94%, 92%, and 96% with ResNet, DenseNet, and U-Net respectively. To improve performance further, we tuned the learning rate, batch size, and dropout, applied batch normalization, increased the network size, and used the Adam optimizer; after tuning, we again achieved accuracies of 94%, 92%, and 96% with ResNet, DenseNet, and U-Net. All three models perform well, but U-Net outperformed the others.
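The Jaccard index mentioned above is, for bounding boxes, the same quantity as intersection-over-union. A minimal sketch on (x, y, width, height) boxes, the format used throughout this notebook:

```python
def box_iou(a, b):
    """IoU (Jaccard index) of two (x, y, width, height) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    # overlap extents, clamped at zero for disjoint boxes
    iw = max(0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

# 2x2 overlap (area 4) over a union of 16 + 16 - 4 = 28
print(box_iou((0, 0, 4, 4), (2, 2, 4, 4)))  # 4/28 ~= 0.1429
```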